[Trad] [svn:pgfr] r1329 - traduc/branches/slony_1_2

admin at listes.postgresql.fr admin at listes.postgresql.fr
Sam 23 Mai 14:31:51 CEST 2009


Author: gleu
Date: 2009-05-23 14:31:51 +0200 (Sat, 23 May 2009)
New Revision: 1329

Modified:
   traduc/branches/slony_1_2/addthings.xml
   traduc/branches/slony_1_2/adminscripts.xml
   traduc/branches/slony_1_2/bestpractices.xml
   traduc/branches/slony_1_2/cluster.xml
   traduc/branches/slony_1_2/concepts.xml
   traduc/branches/slony_1_2/defineset.xml
   traduc/branches/slony_1_2/dropthings.xml
   traduc/branches/slony_1_2/failover.xml
   traduc/branches/slony_1_2/filelist.xml
   traduc/branches/slony_1_2/firstdb.xml
   traduc/branches/slony_1_2/help.xml
   traduc/branches/slony_1_2/installation.xml
   traduc/branches/slony_1_2/intro.xml
   traduc/branches/slony_1_2/legal.xml
   traduc/branches/slony_1_2/listenpaths.xml
   traduc/branches/slony_1_2/locking.xml
   traduc/branches/slony_1_2/loganalysis.xml
   traduc/branches/slony_1_2/logshipping.xml
   traduc/branches/slony_1_2/maintenance.xml
   traduc/branches/slony_1_2/monitoring.xml
   traduc/branches/slony_1_2/partitioning.xml
   traduc/branches/slony_1_2/prerequisites.xml
   traduc/branches/slony_1_2/releasechecklist.xml
   traduc/branches/slony_1_2/reshape.xml
   traduc/branches/slony_1_2/slon.xml
   traduc/branches/slony_1_2/slonconf.xml
   traduc/branches/slony_1_2/slonik_ref.xml
   traduc/branches/slony_1_2/slony.xml
   traduc/branches/slony_1_2/slonyupgrade.xml
   traduc/branches/slony_1_2/subscribenodes.xml
   traduc/branches/slony_1_2/supportedplatforms.xml
   traduc/branches/slony_1_2/testbed.xml
   traduc/branches/slony_1_2/usingslonik.xml
   traduc/branches/slony_1_2/version.xml
   traduc/branches/slony_1_2/versionupgrade.xml
Log:
Merge Slony 1.2.16.


Modified: traduc/branches/slony_1_2/addthings.xml
===================================================================
--- traduc/branches/slony_1_2/addthings.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/addthings.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -336,6 +336,15 @@
       pas de supprimer le schéma et son contenu, elle supprime également toutes
       les colonnes ajoutées avec la commande <xref linkend= "stmttableaddkey"/>.
     </para>
+
+    <note>
+      <para>
+        In &slony1; version 2.0, <xref linkend="stmttableaddkey"/> is
+	<emphasis>no longer supported</emphasis>, and thus <xref
+	linkend="stmtuninstallnode"/> consists very simply of
+	<command>DROP SCHEMA "_ClusterName" CASCADE;</command>.
+      </para>
+    </note>
   </listitem>
 </itemizedlist>
 
@@ -411,7 +420,7 @@
 
   <listitem>
     <para>
-      À cet instant, lancer le script <command>test_slony_state-dbi.pl</command>
+      À cet instant, lancer le script &lteststate;
       est une excellente idée. Ce script parcourt le cluster tout entier et
       pointe les anomalies qu'il trouve. Il peut notamment identifier une grande
       variété de problèmes de communication.
@@ -504,7 +513,7 @@
 
   <listitem>
     <para>
-      Lancer le script <command>test_slony_state-dbi.pl</command> qui se trouve
+      Lancer le script &lteststate; qui se trouve
       dans le répertoire <filename>tools</filename>. Ce script parcourt le cluster
       tout entier et pointe les anomalies qu'il détecte, ainsi que des
       informations sur le statut de chaque n&oelig;ud.

Modified: traduc/branches/slony_1_2/adminscripts.xml
===================================================================
--- traduc/branches/slony_1_2/adminscripts.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/adminscripts.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -206,7 +206,7 @@
 
 </sect3>
 
-<sect3>
+<sect3 id="slonik-drop-node">
 <title>slonik_drop_node</title>
 
 <para>
@@ -214,9 +214,22 @@
   cluster &slony1;.
 </para>
 
 </sect3>

-<sect3>
+<sect3 id="slonik-drop-set">
 <title>slonik_drop_set</title>

+<para>
+  This represents a pretty big potential <quote>foot gun</quote>,
+  as it eliminates a replication set all at once.  A typo that points
+  it to the wrong set could be rather damaging.  Compare to <xref
+  linkend="slonik-unsubscribe-set"/> and <xref
+  linkend="slonik-drop-node"/>; with both of those, attempting to drop a
+  subscription or a node that is vital to your operations will be
+  blocked (via a foreign key constraint violation) if there exists a
+  downstream subscriber that would be adversely affected.  In contrast,
+  there will be no warnings or errors if you drop a set; the set will
+  simply disappear from replication.
+</para>
+
 <para>
@@ -400,13 +413,14 @@
 <para>
   Cette commande parcourt le cluster et supprime le schéma &slony1; sur tous
   les n&oelig;uds. Vous pouvez utiliser cet outil si vous souhaitez détruire
-  la réplication sur l'ensemble du cluster. Il s'agit d'un script
-  <emphasis>TRÈS</emphasis> dangereux&nbsp;!
+  la réplication sur l'ensemble du cluster. Comme ce script détruit des
+  informations, il s'agit d'un script <emphasis>TRÈS</emphasis>
+  dangereux&nbsp;!
 </para>
 
 </sect3>
     
-<sect3>
+<sect3 id="slonik-unsubscribe-set">
 <title>slonik_unsubscribe_set</title>
 
 <para>
@@ -562,15 +576,59 @@
 
 </sect2>
 
+<sect2 id="startslon"> <title>start_slon.sh</title>
+
+<para> This <filename>rc.d</filename>-style script was introduced in
+&slony1; version 2.0; it provides automatable ways of:</para>
+
+<itemizedlist>
+<listitem><para>Starting the &lslon;, via <command> start_slon.sh start </command> </para> 
+<para> Attempts to start the &lslon;, checking first to verify that it
+is not already running, that configuration exists, and that the log
+file location is writable.  Failure cases include:</para>
+
+<itemizedlist>
+<listitem><para> No <link linkend="runtime-config"> slon runtime configuration file </link> exists, </para></listitem>
+<listitem><para> A &lslon; is found with the PID indicated via the runtime configuration, </para></listitem>
+<listitem><para> The specified <envar>SLON_LOG</envar> location is not writable. </para></listitem>
+</itemizedlist>
+</listitem>
+
+<listitem><para>Stopping the &lslon;, via <command> start_slon.sh stop </command> </para> 
+<para> This fails (doing nothing) if the PID (indicated via the runtime configuration file) does not exist; </para> </listitem>
+<listitem><para>Monitoring the status of the &lslon;, via <command> start_slon.sh status </command> </para> 
+<para> This indicates whether or not the &lslon; is running, and, if so, prints out the process ID. </para> </listitem>
+
+</itemizedlist>
+
+<para> The following environment variables are used to control &lslon; configuration:</para>
+
+<glosslist>
+<glossentry><glossterm> <envar> SLON_BIN_PATH </envar> </glossterm>
+<glossdef><para> This indicates where the &lslon; binary program is found. </para> </glossdef> </glossentry>
+<glossentry><glossterm> <envar> SLON_CONF </envar> </glossterm>
+<glossdef><para> This indicates the location of the <link linkend="runtime-config"> slon runtime configuration file </link> that controls how the &lslon; behaves. </para> 
+<para> Note that this file is <emphasis>required</emphasis> to contain a value for <link linkend="slon-config-logging-pid-file">log_pid_file</link>; that is necessary to allow this script to detect whether the &lslon; is running or not. </para>
+</glossdef> </glossentry>
+<glossentry><glossterm> <envar> SLON_LOG </envar> </glossterm>
+<glossdef><para> This file is the location where &lslon; log files are to be stored, if need be.  There is an option <xref linkend ="slon-config-logging-syslog"/> for &lslon; to use <application>syslog</application> to manage logging; in that case, you may prefer to set <envar>SLON_LOG</envar> to <filename>/dev/null</filename>.  </para> </glossdef> </glossentry>
+</glosslist>
+
+<para> Note that these environment variables may either be set, in the
+script, or overridden by values passed in from the environment.  The
+latter usage makes it easy to use this script in conjunction with the
+<xref linkend="testbed"/> so that it is regularly tested. </para>
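+
+<para> For illustration, the following is a minimal sketch of how the
+script might be driven from the command line or from
+<application>cron</application>; the paths shown are assumptions for
+the example, not defaults imposed by the script: </para>
+
+<programlisting>
+#!/bin/sh
+# illustrative locations only - adjust to your installation
+SLON_BIN_PATH=/usr/local/pgsql/bin \
+SLON_CONF=/etc/slony1/node1.conf \
+SLON_LOG=/var/log/slony1/node1.log \
+    ./start_slon.sh start
+
+# later, check whether the slon is running and report its PID
+SLON_CONF=/etc/slony1/node1.conf ./start_slon.sh status
+</programlisting>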
+
+</sect2>
+
 <sect2 id="launchclusters">
 <title>launch_clusters.sh</title>
 <indexterm><primary>lancer un cluster &slony1; cluster en utilisant les fichiers slon.conf</primary></indexterm>
 
 <para>
   Voici un autre script shell qui utilise la configuration produite par
-  <filename>mkslonconf.sh</filename> et qui peut être utilisé lors du démarrage
-  du système, à la suite des processus <filename>rc.d</filename> ou dans un
-  processus cron, pour s'assurer que les processus &lslon; fonctionnent.
+  <filename>mkslonconf.sh</filename> et qui a pour but de supporter an
+  approach to running &slony1; involving regularly
+  (<emphasis>e.g.</emphasis> via a cron process) checking to ensure that
+  &lslon; processes are running.
 </para>
 
 <para>
@@ -703,23 +761,6 @@
   </listitem>
 </itemizedlist>
 
-<note>
-  <para>
-    Ce script fonctionne correctement uniquement lorsqu'il est exécuté sur un
-    n&oelig;ud <emphasis>origine</emphasis>.
-  </para>
-</note>
-
-<warning>
-  <para>
-    Si ce script est exécuté sur un n&oelig;ud <emphasis>abonné</emphasis>,
-    le <command>pg_dump</command> utilisé pour dessiner le schéma à partir du
-    n&oelig;ud source tentera de récupérer le schéma <emphasis>cassé</emphasis>
-    trouvé sur l'abonne et, du coup, le résultat ne sera <emphasis>pas</emphasis>
-    une représentation fidèle du schéma disponible sur le n&oelig;ud origine.
-  </para>
-</warning>
-
 </sect2>
 
 <sect2>
@@ -965,7 +1006,7 @@
 
   <listitem>
     <para>
-      <filename>create_set.slonik</filename>
+      <filename>create_nodes.slonik</filename>
     </para>
 
     <para>
@@ -1081,6 +1122,8 @@
 <title><filename>slon.in-profiles</filename></title>
 <subtitle>profiles dans le style d'Apache pour FreeBSD <filename>ports/databases/slony/*</filename></subtitle>
 
+<indexterm><primary> Apache-style profiles for FreeBSD </primary> <secondary>FreeBSD </secondary> </indexterm>
+
 <para>
   Dans le répertoire <filename>tools</filename>, le script
   <filename>slon.in-profiles</filename> permet de lancer des instances &lslon;
@@ -1090,4 +1133,63 @@
 
 </sect2>
 
+<sect2 id="duplicate-node">
+<title><filename> duplicate-node.sh </filename></title>
+
+<indexterm><primary> duplicating nodes </primary> </indexterm>
+
+<para> In the <filename>tools</filename> area,
+<filename>duplicate-node.sh</filename> is a script that may be used to
+help create a new node that duplicates one of the ones in the
+cluster. </para>
+
+<para> The script expects the following parameters: </para>
+<itemizedlist>
+<listitem><para> Cluster name </para> </listitem>
+<listitem><para> New node number </para> </listitem>
+<listitem><para> Origin node </para> </listitem>
+<listitem><para> Node being duplicated </para> </listitem>
+<listitem><para> New node </para> </listitem>
+</itemizedlist>
+
+<para> For each of the nodes specified, the script offers flags to
+specify <function>libpq</function>-style parameters for
+<envar>PGHOST</envar>, <envar>PGPORT</envar>,
+<envar>PGDATABASE</envar>, and <envar>PGUSER</envar>; it is expected
+that <filename>.pgpass</filename> will be used for storage of
+passwords, as is generally considered best practice. Those values may
+inherit from the <function>libpq</function> environment variables, if
+not set, which is useful when using this for testing.  When
+<quote>used in anger,</quote> however, it is likely that nearly all of
+the 14 available parameters should be used. </para>
+
+<para> The script prepares files, normally in
+<filename>/tmp</filename>, and will report the name of the directory
+that it creates that contains SQL and &lslonik; scripts to set up the
+new node. </para>
+
+<itemizedlist>
+<listitem><para> <filename> schema.sql </filename> </para> 
+<para> This is drawn from the origin node, and contains the <quote>pristine</quote> database schema that must be applied first.</para></listitem>
+<listitem><para> <filename> slonik.preamble </filename> </para> 
+
+<para> This <quote>preamble</quote> is used by the subsequent set of slonik scripts. </para> </listitem>
+<listitem><para> <filename> step1-storenode.slonik </filename> </para> 
+<para> A &lslonik; script to set up the new node. </para> </listitem>
+<listitem><para> <filename> step2-storepath.slonik </filename> </para> 
+<para> A &lslonik; script to set up path communications between the provider node and the new node. </para> </listitem>
+<listitem><para> <filename> step3-subscribe-sets.slonik </filename> </para> 
+<para> A &lslonik; script to request subscriptions for all replication sets.</para> </listitem>
+</itemizedlist>
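+
+<para> As an illustrative sketch (the directory name and connection
+parameters are hypothetical, and it is assumed here that the step
+scripts do not themselves include the preamble), the generated pieces
+would typically be applied in order along these lines: </para>
+
+<programlisting>
+#!/bin/sh
+DIR=/tmp/duplicate-node.12345    # directory reported by the script
+
+# 1. load the pristine schema into the new node's database
+psql -h newhost -U slony -d newdb -f $DIR/schema.sql
+
+# 2. run the generated slonik scripts, prefixed by the preamble
+cat $DIR/slonik.preamble $DIR/step1-storenode.slonik      | slonik
+cat $DIR/slonik.preamble $DIR/step2-storepath.slonik      | slonik
+cat $DIR/slonik.preamble $DIR/step3-subscribe-sets.slonik | slonik
+</programlisting>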
+
+<para> For testing purposes, this is sufficient to get a new node working.  The configuration may not necessarily reflect what is desired as a final state:</para>
+
+<itemizedlist>
+<listitem><para> Additional communications paths may be desirable in order to have redundancy. </para> </listitem>
+<listitem><para> It is assumed, in the generated scripts, that the new node should support forwarding; that may not be true. </para> </listitem>
+<listitem><para> It may be desirable later, after the subscription process is complete, to revise subscriptions. </para> </listitem>
+</itemizedlist>
+
+</sect2>
+
 </sect1>

Modified: traduc/branches/slony_1_2/bestpractices.xml
===================================================================
--- traduc/branches/slony_1_2/bestpractices.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/bestpractices.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -149,9 +149,8 @@
         <para>
 	  Le système va périodiquement faire un troncage (en utilisant
           <command>TRUNCATE</command> pour nettoyer l'ancienne table) entre les
-	  deux tables de logs, <xref linkend="table.sl-log-1"/> et <xref
-	  linkend="table.sl-log-2"/>, évitant une croissance illimitée de
-	  l'espace <quote>mort</quote> à cet endroit.
+	  deux tables de logs, &sllog1; et &sllog2;, évitant une croissance
+	  illimitée de l'espace <quote>mort</quote> à cet endroit.
 	</para>
       </listitem>
     </itemizedlist>
@@ -164,6 +163,13 @@
     </para>
 
     <para>
+      Most pointedly, any node that is expected to be a failover
+      target must have its subscription(s) set up with the option
+      <command>FORWARD = YES</command>.  Otherwise, that node is not a
+      candidate for being promoted to origin node.
+    </para>
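+
+    <para>
+      For instance (a sketch only; the set, provider and receiver
+      numbers are illustrative), such a subscription would be requested
+      in &lslonik; as:
+    </para>
+
+    <programlisting>
+subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
+    </programlisting>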
+
+    <para>
       Cela peut simplement se résumer à réfléchir à une liste de priorités
       indiquant qui devrait basculer vers quoi, plutôt que d'essayer
       d'automatiser la bascule. Quoiqu'il en soit, savoir au préalable ce
@@ -205,6 +211,21 @@
 
   <listitem>
     <para>
+      If you are using the autovacuum process in recent
+      versions of &postgres;, you may wish to leave &slony1; tables out, as
+      &slony1; is a bit more intelligent about vacuuming when it is expected
+      to be conspicuously useful (<emphasis>e.g.</emphasis> - immediately
+      after purging old data) to do so than autovacuum can be.
+    </para>
+
+    <para>
+      See <xref linkend="maintenance-autovac"/> for more
+      details.
+    </para>
+  </listitem>
+
+  <listitem>
+    <para>
       Il a été prouvé qu'il est préférable d'exécuter tous les démons &lslon;
       sur un serveur central pour chaque sous-réseau.
     </para>
@@ -234,10 +255,13 @@
       réseau</quote> que le n&oelig;ud d'origine, afin que la liaison entre
       eux soit une connexion <quote>locale</quote>. N'établissez
       <emphasis>pas</emphasis> ce genre de liaison à travers un réseau WAN.
+      Thus, if you have nodes in London and nodes in New
+      York, the &lslon;s managing London nodes should run in London, and the
+      &lslon;s managing New York nodes should run in New York.
     </para>
 
     <para>
-      Une coupure de lien WAN  peut provoquer des connexions
+      Une coupure de lien WAN (or flakiness of the WAN in general) peut provoquer des connexions
       <quote>zombies</quote>, et le comportement typique du TCP/IP consiste à
       <link linkend="multipleslonconnections">laisser ces connexions persister,
       empêchant le démon slon de redémarrer pendant environ deux heures</link>.
@@ -270,7 +294,7 @@
     </para>
 
     <para>
-      L'exception qui rend un redémarrage de &lslon; indésirable est le cas où
+      Le scénario exceptionnel qui rend un redémarrage de &lslon; indésirable est le cas où
       une commande <command>COPY_SET</command> est en cours d'exécution sur un
       grand ensemble de réplication. Dans ce genre de cas, arrêter un &lslon;
       peut annuler plusieurs heures de travail.
@@ -314,6 +338,14 @@
       clef. Ceci entraînerait potentiellement des bogues dans votre application
       à cause de &slony1;.
     </para>
+    
+    <warning>
+      <para>
+        In version 2 of &slony1;, <xref linkend="stmttableaddkey"/> is no longer
+	supported.  You <emphasis>must</emphasis> have either a true primary key
+	or a candidate primary key.
+      </para>
+    </warning>
   </listitem>
 
   <listitem>
@@ -387,8 +419,10 @@
       verrou exclusif sur ces objets&nbsp;; ainsi le <command>script
       d'exécution des modifications</command> entraîne un verrou exclusif sur
       <emphasis>toutes</emphasis> les tables répliquées. Cela peut s'avérer
-      très problématique lorsque les applications fonctionnent&nbsp;; des
-      inter-blocages («&nbsp;deadlocks&nbsp;») peuvent alors se produire.
+      très problématique lorsque les applications fonctionnent&nbsp;: when
+      running DDL, &slony1; is asking for those exclusive table locks, whilst,
+      simultaneously, some application connections are gradually relinquishing
+      locks, whilst others are backing up behind the &slony1; locks.
     </para>
 
     <para>
@@ -636,8 +670,8 @@
 
   <listitem>
     <para>
-      Utilisez <filename>test_slony_state.pl</filename> pour rechercher les
-      problèmes de configuration.
+      Exécutez &lteststate; fréquemment pour découvrir les problèmes de
+      configuration aussi rapidement que possible.
     </para>
 
     <para>
@@ -656,6 +690,14 @@
       Si, de manière mystérieuse, la réplication <quote>ne marche pas</quote>,
       cet outil peut vérifier beaucoup de problèmes potentiels pour vous.
     </para>
+
+    <para>
+      It will also notice a number of sorts of situations where
+      something has broken.  Not only should it be run when problems have
+      been noticed - it should be run frequently (<emphasis>e.g.</emphasis>
+      - hourly, or thereabouts) as a general purpose <quote>health
+      check</quote> for each &slony1; cluster.
+    </para>
   </listitem>
     
   <listitem>
@@ -714,6 +756,16 @@
       verrouiller l'accès au n&oelig;ud pour tous les utilisateurs autres que
       <command>slony</command> car&nbsp;:
     </para>
+
+    <para>
+      It is also a very good idea to change &lslon; configuration for
+      <xref linkend="slon-config-sync-interval"/> on the origin node to
+      reduce how many <command>SYNC</command> events are generated.  If the
+      subscription takes 8 hours, there is little sense in there being 28800
+      <command>SYNC</command>s waiting to be applied.  Running a
+      <command>SYNC</command> every minute or so is likely to make catching
+      up easier.
+    </para>
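+
+    <para>
+      For example (the values are illustrative rather than
+      recommendations), the origin node's slon runtime configuration
+      might be relaxed for the duration of the subscription along these
+      lines:
+    </para>
+
+    <programlisting>
+# generate SYNC events far less aggressively while the large
+# subscription is being copied (values are in milliseconds)
+sync_interval=60000
+sync_interval_timeout=120000
+    </programlisting>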
   </listitem>
 </itemizedlist>
 
@@ -829,8 +881,8 @@
   
     <para>
       Parallèlement, on constate une croissance <emphasis>énorme</emphasis>
-      des tables <xref linkend="table.sl-log-1"/> et <xref
-      linkend="table.sl-seqlog"/>. Malheureusement, une fois que
+      des tables &sllog1;, &sllog2; et &slseqlog;. Malheureusement, une fois
+      que
       <command>COPY_SET</command> est terminé, on constate que les requêtes
       sur ces tables se font via des <command>parcours séquentiels</command>.
       Même si le <command>SYNC</command> ne traite qu'une petite partie de

Modified: traduc/branches/slony_1_2/cluster.xml
===================================================================
--- traduc/branches/slony_1_2/cluster.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/cluster.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -19,10 +19,9 @@
   qui stockent la configuration de &slony1; et les informations sur l'état
   de la réplication.
   Consultez <xref linkend="schema"/> pour plus d'informations sur 
-  ce qui est stocké dans ce schéma. Plus précisément, les tables
-  <xref linkend="table.sl-log-1"/> et <xref linkend="table.sl-log-2"/>
-  tracent les modifications collectées sur le n&oelig;ud d'origine afin
-  qu'elles soient répliquées sur les n&oelig;uds abonnés.
+  ce qui est stocké dans ce schéma. Plus précisément, les tables &sllog1; et
+  &sllog2; tracent les modifications collectées sur le n&oelig;ud d'origine
+  afin qu'elles soient répliquées sur les n&oelig;uds abonnés.
 </para>
 
 <para>
@@ -36,6 +35,13 @@
 </para>
 
 <para>
+  Note that, as recorded in the <xref linkend="faq"/> under <link
+  linkend="cannotrenumbernodes"> How can I renumber nodes?</link>, the
+  node number is immutable, so it is not possible to change a node's
+  node number after it has been set up.
+</para>
+
+<para>
   Une réflexion doit être menée, dans des cas plus complexes,
   afin de s'assurer que le système de numérotation reste cohérent,
   sans quoi les administrateurs deviendront fous. Les numéros de n&oelig;ud

Modified: traduc/branches/slony_1_2/concepts.xml
===================================================================
--- traduc/branches/slony_1_2/concepts.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/concepts.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -56,7 +56,7 @@
 </para>
 
 <programlisting>
-cluster name = 'quelque_chose';
+cluster name = quelque_chose;
 </programlisting>
 
 <para>

Modified: traduc/branches/slony_1_2/defineset.xml
===================================================================
--- traduc/branches/slony_1_2/defineset.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/defineset.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -108,12 +108,16 @@
   <listitem>
     <para>
       Si la table n'a pas de clef primaire candidate, vous devez demander à
-      &slony1; d'en fournir une. Tout d'abord, vous devez utiliser <xref
-      linkend="stmttableaddkey"/> pour ajouter une colonne peuplée en utilisant
-      une séquence &slony1;. Ensuite, <xref linkend="stmtsetaddtable"/> inclut
-      la directive <option>key=serial</option> pour indiquer que la propre
-      colonne de &slony1; doit être utilisé.
+      &slony1; d'en fournir une en utilisant <xref linkend="stmttableaddkey"/>.
     </para>
+
+    <warning>
+      <para>
+        <xref linkend="stmttableaddkey"/> was always considered a
+	<quote>kludge</quote>, at best, and as of version 2.0, it is considered
+	such a misfeature that it is being removed.
+      </para>
+    </warning>
   </listitem>
 </itemizedlist>
 
@@ -168,6 +172,18 @@
     </para>
 
     <para>
+      Another issue comes up particularly frequently when replicating
+      across a WAN; sometimes the network connection is a little bit
+      unstable, such that there is a risk that a connection held open for
+      several hours will lead to <command>CONNECTION TIMEOUT.</command> If
+      that happens when 95% done copying a 50-table replication set
+      consisting of 250GB of data, that could ruin your whole day.  If the
+      tables were, instead, associated with separate replication sets, that
+      failure at the 95% point might only interrupt, temporarily, the
+      copying of <emphasis>one</emphasis> of those tables.
+    </para>
+
+    <para>
       Certains <quote>effets négatifs</quote> surviennent lorsque la base de
       données répliquée contient plusieurs Go de données, et qu'il faut des
       heures ou des jours pour qu'un n&oelig;ud abonné réalise une copie
@@ -223,7 +239,7 @@
   Chaque fois qu'un évènement SYNC est traité, les valeurs sont enregistrées
   pour <emphasis>toutes</emphasis> les séquences de l'ensemble de réplication.
   Si vous avez beaucoup de séquences, cela peut augmenter fortement la
-  volumétrie de la table <xref linkend="table.sl-seqlog"/> .
+  volumétrie de la table &slseqlog;.
 </para>
 
 <para>
@@ -244,12 +260,12 @@
       <para>
         Si elle n'est jamais mise à jour, le trigger de la table sur le
 	n&oelig;ud origine n'est jamais déclenché, et aucune entrée n'est
-	ajoutée dans <xref linkend="table.sl-log-1"/>. La table n'apparaît
+	ajoutée dans &sllog1;/&sllog2;. La table n'apparaît
 	jamais dans aucune des requêtes de réplication (<emphasis>par
 	exemple&nbsp;:</emphasis> dans les requêtes <command>FETCH 100 FROM
 	LOG</command> utilisées pour trouver les données à répliquer) car elles
 	ne recherchent que les tables qui ont des entrées dans
-        <xref linkend="table.sl-log-1"/>.
+        &sllog1;/&sllog2;.
       </para>
     </listitem>
 
@@ -261,7 +277,9 @@
 
       <para>
         Pour répliquer 300 séquences, 300 lignes doivent être ajoutées dans la
-	<xref linkend="table.sl-seqlog"/> de manière régulière.
+	&slseqlog; de manière régulière, at least up until the 2.0 branch,
+        where updates are only applied when the value of a given sequence is
+        seen to change.
       </para>
 
       <para>

Modified: traduc/branches/slony_1_2/dropthings.xml
===================================================================
--- traduc/branches/slony_1_2/dropthings.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/dropthings.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -214,4 +214,17 @@
 
 </sect2>
 
+<sect2>
+<title> Verifying Cluster Health </title>
+
+<para>
+  After performing any of these procedures, it is an excellent
+  idea to run the <filename>tools</filename> script &lteststate;, which
+  rummages through the state of the entire cluster, pointing out any
+  anomalies that it finds.  This includes a variety of sorts of
+  communications problems.
+</para>
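+
+<para>
+  For illustration, an invocation might look something like the
+  following; the option names shown here are assumptions rather than a
+  definitive reference, so consult the script itself in
+  <filename>tools</filename> for the exact set it accepts:
+</para>
+
+<programlisting>
+cd tools
+perl test_slony_state-dbi.pl --host=masterhost --database=mydb \
+     --user=slony --cluster=MyCluster
+</programlisting>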
+
+</sect2>
+
 </sect1>

Modified: traduc/branches/slony_1_2/failover.xml
===================================================================
--- traduc/branches/slony_1_2/failover.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/failover.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -60,11 +60,69 @@
   <listitem>
     <para>
       Au moment ou nous écrivons ces lignes, basculer vers un autre serveur
-      nécessite que l'application se reconnecte à la base de donnée.
+      nécessite que l'application se reconnecte à la nouvelle base de donnée.
       Donc, pour éviter toute complication, nous éteignons le serveur web. Les
       utilisateurs qui ont installé <application>pgpool</application> pour
       gérer les connexions peuvent simplement éteindre le pool.
     </para>
+    
+    <para>
+      What needs to be done, here, is highly dependent on the way
+      that the application(s) that use the database are configured.  The
+      general point is thus: Applications that were connected to the old
+      database must drop those connections and establish new connections to
+      the database that has been promoted to the <quote>master</quote>.  There
+      are a number of ways that this may be configured, and therefore, a
+      number of possible methods for accomplishing the change:
+    </para>
+
+    <itemizedlist>
+
+      <listitem>
+        <para>
+	  The application may store the name of the database in a file.
+	</para>
+
+        <para>
+	  In that case, the reconfiguration may require changing the
+          value in the file, and stopping and restarting the application to get
+          it to point to the new location.
+        </para>
+      </listitem>
+
+      <listitem>
+        <para>
+	  A clever usage of DNS might involve creating a CNAME
+          <ulink url="http://www.iana.org/assignments/dns-parameters"> DNS
+          record </ulink> that establishes a name for the application to use to
+          reference the node that is in the <quote>master</quote> role.
+	</para>
+
+        <para>
+	  In that case, reconfiguration would require changing the CNAME
+          to point to the new server, and possibly restarting the application to
+          refresh database connections.
+        </para>
+      </listitem>
+
+      <listitem>
+        <para>
+	  If you are using <application>pg_pool</application> or some
+          similar <quote>connection pool manager,</quote> then the reconfiguration
+          involves reconfiguring this management tool, but is otherwise similar
+          to the DNS/CNAME example above.
+	</para>
+      </listitem>
+
+    </itemizedlist>
+
+    <para>
+      Whether or not the application that accesses the database needs
+      to be restarted depends on how it is coded to cope with failed
+      database connections; if, after encountering an error, it tries
+      re-opening them, then there may be no need to restart it.
+    </para>
+
   </listitem>
   
   <listitem>
@@ -118,6 +176,13 @@
   n'implique aucune perte de données.
 </para>
 
+<para>
+  After performing the configuration change, you should, as <xref
+  linkend="bestpractices"/>, run the &lteststate; scripts in order to
+  validate that the cluster state remains in good order after this
+  change.
+</para>
+
 </sect2>
 
 <sect2>
@@ -173,6 +238,18 @@
       de bascule d'urgence est complétée, plus aucun n&oelig;ud du cluster ne
       reçoit d'information de la part du n&oelig;ud 1.
     </para>
+    
+    <note>
+      <para>
+        Note that in order for node 2 to be considered as a
+        candidate for failover, it must have been set up with the <xref
+        linkend="stmtsubscribeset"/> option <command>forward =
+        yes</command>, which has the effect that replication log data is
+        collected in &sllog1;/&sllog2; on node 2.  If replication log data is
+        <emphasis>not</emphasis> being collected, then failover to that node
+        is not possible.
+      </para>
+    </note>
   </listitem>
 
   <listitem>
@@ -192,7 +269,7 @@
       références au n&oelig;ud 1 dans la table <xref linkend="table.sl-node"/>,
       ainsi que ses tables associées telle que <xref
       linkend="table.sl-confirm"/>&nbsp;; puisque des données sont toujours
-      présentes dans <xref linkend="table.sl-log-1"/>, &slony1; ne peut pas
+      présentes dans &sllog1;/&sllog2;, &slony1; ne peut pas
       purger immédiatement le n&oelig;ud.
     </para>
 
@@ -215,10 +292,101 @@
       linkend="rebuildnode1"/> pour plus de détails sur ce que cela implique.
     </para>
   </listitem>
+  
+  <listitem>
+    <para>
+      After performing the configuration change, you should, as <xref
+      linkend="bestpractices"/>, run the &lteststate; scripts in order to
+      validate that the cluster state remains in good order after this change.
+    </para>
+  </listitem>
 </itemizedlist>
 
 </sect2>
 
+<sect2 id="complexfailover"> <title> Failover With Complex Node Set </title>
+
+<para> Failover is relatively <quote>simple</quote> if there are only two
+nodes; if a &slony1; cluster comprises many nodes, achieving a clean
+failover requires careful planning and execution. </para>
+
+<para> Consider the following diagram describing a set of six nodes at two sites.
+
+<inlinemediaobject>
+  <imageobject>
+    <imagedata fileref="complexenv.png"/>
+  </imageobject>
+  <textobject>
+    <phrase> Symmetric Multisites</phrase>
+  </textobject>
+</inlinemediaobject>
+
+</para>
+
+<para> Let us assume that nodes 1, 2, and 3 reside at one data
+centre, and that we find ourselves needing to perform failover due to
+failure of that entire site.  Causes could range from a persistent
+loss of communications to the physical destruction of the site; the
+cause is not actually important, as what we are concerned about is how
+to get &slony1; to properly fail over to the new site.</para>
+
+<para> We will further assume that node 5 is to be the new origin,
+after failover. </para>
+
+<para> The sequence of &slony1; reconfiguration required to properly
+failover this sort of node configuration is as follows:
+</para>
+
+<itemizedlist>
+
+<listitem><para> Resubscribe (using <xref linkend="stmtsubscribeset"/>)
+each node that is to be kept in the reformation of the cluster that is
+not already subscribed to the intended data provider.  </para>
+
+<para> In the example cluster, this means we would likely wish to
+resubscribe nodes 4 and 6 to both point to node 5.</para>
+
+<programlisting>
+   include &lt;/tmp/failover-preamble.slonik&gt;;
+   subscribe set (id = 1, provider = 5, receiver = 4);
+   subscribe set (id = 1, provider = 5, receiver = 6);
+</programlisting>
+
+</listitem>
+<listitem><para> Drop all unimportant nodes, starting with leaf nodes.</para>
+
+<para> Since nodes 1, 2, and 3 are inaccessible, we must indicate the
+<envar>EVENT NODE</envar> so that the event reaches the still-live
+portions of the cluster. </para>
+
+<programlisting>
+   include &lt;/tmp/failover-preamble.slonik&gt;;
+   drop node (id=2, event node = 4);
+   drop node (id=3, event node = 4);
+</programlisting>
+
+</listitem>
+
+<listitem><para> Now, run <command>FAILOVER</command>.</para>
+
+<programlisting>
+   include &lt;/tmp/failover-preamble.slonik&gt;;
+   failover (id = 1, backup node = 5);
+</programlisting>
+
+</listitem>
+
+<listitem><para> Finally, drop the former origin from the cluster.</para>
+
+<programlisting>
+   include &lt;/tmp/failover-preamble.slonik&gt;;
+   drop node (id=1, event node = 4);
+</programlisting>
+</listitem>
+
+</itemizedlist>
+</sect2>
+
 <sect2>
 <title>Automatisation de la commande <command>FAIL OVER</command></title>
 <indexterm><primary>automatisation des bascules d'urgence</primary></indexterm>

Modified: traduc/branches/slony_1_2/filelist.xml
===================================================================
--- traduc/branches/slony_1_2/filelist.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/filelist.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -48,7 +48,9 @@
 <!ENTITY loganalysis        SYSTEM "loganalysis.xml">
 <!ENTITY slonyupgrade       SYSTEM "slonyupgrade.xml">
 <!ENTITY releasechecklist   SYSTEM "releasechecklist.xml">
+<!ENTITY raceconditions     SYSTEM "raceconditions.xml">
 <!ENTITY partitioning       SYSTEM "partitioning.xml">
+<!ENTITY triggers           SYSTEM "triggers.xml">
 
 <!-- specifique PGFR -->
 <!ENTITY    frenchtranslation        SYSTEM "frenchtranslation.xml">

Modified: traduc/branches/slony_1_2/firstdb.xml
===================================================================
--- traduc/branches/slony_1_2/firstdb.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/firstdb.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -41,8 +41,14 @@
     <listitem>
       <para>
         Vous avez la ligne <option>tcpip_socket=true</option> dans votre
-        <filename>postgresql.conf</filename> et
+        <filename>postgresql.conf</filename>&nbsp;;
       </para>
+      
+      <note>
+        <para>
+	  This is no longer needed for &postgres; 8.0 and later versions.
+	</para>
+      </note>
     </listitem>
 
     <listitem>
@@ -126,6 +132,24 @@
 createdb -O $PGBENCHUSER -h $SLAVEHOST $SLAVEDBNAME
 pgbench -i -s 1 -U $PGBENCHUSER -h $MASTERHOST $MASTERDBNAME</programlisting>
 
+<para> One of the tables created by
+<application>pgbench</application>, <envar>history</envar>, does not
+have a primary key.  In earlier versions of &slony1;, a &lslonik;
+command called <xref linkend="stmttableaddkey"/> could be used to
+introduce one.  This caused a number of problems, and so this feature
+has been removed in version 2 of &slony1;.  It now
+<emphasis>requires</emphasis> that there is a suitable candidate
+primary key. </para>
+
+<para> The following SQL requests will establish a proper primary key on this table: </para>
+
+<programlisting>
+psql -U $PGBENCHUSER -h $MASTERHOST -d $MASTERDBNAME -c "begin; alter table
+history add column id serial; update history set id =
+nextval('history_id_seq'); alter table history add primary key(id);
+commit"
+</programlisting>
+
 <para>
   Puisque &slony1; dépend de la présence du langage procédural pl/pgSQL, nous
   devons l'installer maintenant. Il est possible que vous ayez installé
@@ -179,41 +203,24 @@
   principalement des procédures stockées sur les n&oelig;uds maître et esclaves.
 </para>
 
-<sect3><title>Utiliser les scripts altperl</title>
-<indexterm><primary>Utilisation des scripts altperl</primary></indexterm>
+<para> The example that follows uses <xref linkend="slonik"/> directly
+(or embedded directly into scripts).  This is not necessarily the most
+pleasant way to get started; there exist tools for building <xref
+linkend="slonik"/> scripts under the <filename>tools</filename>
+directory, including:</para>
+<itemizedlist>
+<listitem><para> <xref linkend="altperl"/> - a set of Perl scripts that
+build <xref linkend="slonik"/> scripts based on a single
+<filename>slon_tools.conf</filename> file. </para> </listitem>
 
-<para>
-  L'utilisation des scripts <xref linkend="altperl"/> est une façon simple de
-  faire ses premiers pas. Le script <command>slonik_build_env</command> génère
-  une sortie fournissant les détails nécessaires à la construction complète
-  d'un fichier <filename>rslon_tools.conf</filename>. Un exemple de fichier
-  <filename>slon_tools.conf</filename> est fournit dans la distribution afin
-  d'aider à la prise en main. Les script altperl font tous référence à ce
-  fichier central de configuration afin de simplifier l'administration. Une
-  fois le fichier slon_tools.conf créé, vous pouvez poursuivre comme ceci&nbsp;:
-</para>
+<listitem><para> <xref linkend="mkslonconf"/> - a shell script
+(<emphasis>e.g.</emphasis> - works with Bash) which, based either on
+self-contained configuration or on shell environment variables,
+generates a set of <xref linkend="slonik"/> scripts to configure a
+whole cluster. </para> </listitem>
 
-<programlisting># Initialisation du cluster:
-$ slonik_init_cluster  | slonik 
+</itemizedlist>
 
-# Démarrage de slon  (ici 1 et 2 sont les numéros de n&oelig;uds)
-$ slon_start 1    
-$ slon_start 2
-
-# Création des ensemble (ici 1 est le numéro de l'ensemble)
-$ slonik_create_set 1             
-
-# Abonner l'ensemble dans le second n&oelig;ud (1= n° d'ensemble, 2= n° de n&oelig;ud)
-$ slonik_subscribe_set  1 2 | slonik</programlisting>
-
-<para>
-  Vous avez répliqué votre première base de données. Vous pouvez sauter la
-  section suivante de la documentation si vous le souhaitez car il s'agit
-  d'une approche plus <quote>rustre</quote>.
-</para>
-
-</sect3>
-
 <sect3>
 <title>Utiliser directement les commandes slonik</title>
 
@@ -254,18 +261,6 @@
 	init cluster ( id=1, comment = 'Master Node');
  
 	#--
-	# Puisque la table history n'a pas de clé primaire, ni de contrainte
-	# unique qui pourrait être utilisée pour identifier une ligne, nous
-	# devons en ajouter une.
-	# La commande suivante ajoute à la table une colonne bigint nommée
-	# _Slony-I_$CLUSTERNAME_rowID. Elle comme valeur par défaut
-	# nextval('_$CLUSTERNAME.s1_rowid_seq'), et dispose des contraintes
-	# UNIQUE et NOT NULL. Toutes les lignes existantes seront initialisées
-	# avec un dentifiant.
-	#--
-	table add key (node id = 1, fully qualified name = 'public.history');
-
-	#--
 	# Slony-I regroupe les tables dans des ensembles.
 	# La plus petite unité qu'un noeud peut répliquer est un ensemble.
 	# Les commandes suivantes crées un ensemble contenant 4 tables pgbench.
@@ -275,14 +270,14 @@
 	set add table (set id=1, origin=1, id=1, fully qualified name = 'public.accounts', comment='accounts table');
 	set add table (set id=1, origin=1, id=2, fully qualified name = 'public.branches', comment='branches table');
 	set add table (set id=1, origin=1, id=3, fully qualified name = 'public.tellers', comment='tellers table');
-	set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table', key = serial);
+	set add table (set id=1, origin=1, id=4, fully qualified name = 'public.history', comment='history table');
 
 	#--
 	# Création du second noeud (l'esclave) 
 	# décrit comment les 2 noeuds vont se connecter l'un à l'autre
 	# et quelle manière ils vont écouter les événements..
 	#--
-	store node (id=2, comment = 'Slave node');
+	store node (id=2, comment = 'Slave node', event node=1);
 	store path (server = 1, client = 2, conninfo='dbname=$MASTERDBNAME host=$MASTERHOST user=$REPLICATIONUSER');
 	store path (server = 2, client = 1, conninfo='dbname=$SLAVEDBNAME host=$SLAVEHOST user=$REPLICATIONUSER');
 _EOF_
@@ -359,7 +354,7 @@
   Lorsque le processus de copie est terminé, le démon de réplication sur le
   n&oelig;ud <envar>$SLAVEHOST</envar> commencera à se synchroniser en
   appliquant les journaux de réplication qui auront été accumulés. Cela se
-  fera par petit à petit, par tranches de 10 secondes de travail applicatifs.
+  fera petit à petit, par tranches d'environ 10 secondes de travail applicatif.
   Selon les performances des deux systèmes impliqués, la taille des deux bases
   de données, la charge de transaction et la qualité de l'optimisation et de la
   maintenance effectuées sur les deux bases de données, ce processus de
@@ -368,6 +363,14 @@
 </para>
 
 <para>
+  If you encounter problems getting this working, check over the
+  logs for the &lslon; processes, as error messages are likely to be
+  suggestive of the nature of the problem.  The tool &lteststate; is
+  also useful for diagnosing problems with nearly-functioning
+  replication clusters.
+</para>
+
+<para>
   Vous avez maintenant configuré avec succès votre premier système de
   réplication maître-esclave basique, et les deux bases de données devraient,
   une fois que l'esclave sera synchronisé, contenir des données identiques. Ça,
@@ -428,6 +431,48 @@
   développeurs sur <ulink url="http://slony.info/">http://slony.info/</ulink>.
 </para>
 
+<para>
+  Be sure to be prepared with useful
+diagnostic information including the logs generated by &lslon;
+processes and the output of &lteststate;. </para></sect3>
+
+<sect3><title>Using the altperl scripts</title>
+
+<indexterm><primary> altperl script example </primary></indexterm>
+
+<para>
+Using the <xref linkend="altperl"/> scripts is an alternative way to
+get started; it allows you to avoid writing slonik scripts, at least
+for some of the simple ways of configuring &slony1;.  The
+<command>slonik_build_env</command> script will generate output
+providing details you need to build a
+<filename>slon_tools.conf</filename>, which is required by these
+scripts.  An example <filename>slon_tools.conf</filename> is provided
+in the distribution to get you started.  The altperl scripts all
+reference this central configuration file to centralize cluster
+configuration information. Once slon_tools.conf has been created, you
+can proceed as follows:
+</para>
+
+<programlisting>
+# Initialize cluster:
+$ slonik_init_cluster  | slonik 
+
+# Start slon  (here 1 and 2 are node numbers)
+$ slon_start 1    
+$ slon_start 2
+
+# Create Sets (here 1 is a set number)
+$ slonik_create_set 1 | slonik             
+
+# subscribe set to second node (1= set ID, 2= node ID)
+$ slonik_subscribe_set 1 2 | slonik
+</programlisting>
+
+<para> You have now replicated your first database.  You can skip the
+following section of documentation if you'd like, which documents more
+of a <quote>bare-metal</quote> approach.</para>
+
 </sect3>
 
 </sect2>

Modified: traduc/branches/slony_1_2/help.xml
===================================================================
--- traduc/branches/slony_1_2/help.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/help.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -23,8 +23,8 @@
       <para>
         Avant de soumettre des questions sur un forum public en demandant
         pourquoi <quote>quelque chose d'étrange</quote> s'est produit dans
-	votre cluster de réplication, veuillez lancer la commande
-	<xref linkend="testslonystate"/>.
+	votre cluster de réplication, be sure to run the &lteststate; tool and be
+        prepared to provide its output.
         Cela peut vous donner plus d'idées sur ce qui ne va pas, et les
 	résultats seront sûrement d'une grande aide dans l'analyse du
 	problème.

Modified: traduc/branches/slony_1_2/installation.xml
===================================================================
--- traduc/branches/slony_1_2/installation.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/installation.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -54,7 +54,7 @@
 <indexterm><primary>installation : version courte</primary></indexterm>
 
 <para>
-<screen>PGMAIN=/usr/local/pgsql746-freebsd-2005-04-01 \
+<screen>PGMAIN=/usr/local/pgsql839-freebsd-2008-09-03 \
 ./configure \
     --with-pgconfigdir=$PGMAIN/bin
 gmake all; gmake install
@@ -110,7 +110,7 @@
 </para>
 
 <para>
-  La version 8 de &postgres; installe les fichiers d'en-tête
+  Les versions 8.0 et ultérieures de &postgres; installent les fichiers d'en-tête
   <command>#include</command> par défaut. Avec les versions 7.4 et antérieures,
   vous devez vous assurer que la compilation inclut la commande <command>make
   install-all-headers</command>, sinon les en-têtes du serveur ne seront pas
@@ -200,8 +200,8 @@
 </para>
 
 <para>
-  Voici la liste des fichiers principaux installés dans l'instance
-  PostgreSQL&nbsp;:
+  Voici la liste des fichiers principaux installés dans l'instance &postgres;
+  pour les versions de &slony1; jusqu'à la 1.2.x&nbsp;:
 </para>
 
 <itemizedlist>
@@ -221,12 +221,13 @@
 
 <para>
   (Notez qu'au fur et à mesure des versions, la liste des fichiers spécifiques
-  à une version va s'agrandir...)
+  à une version a tendance à grossir...)
 </para>
 
 <para>
   Les fichiers <filename>.sql</filename> ne sont pas encore complètement
-  installés. Les versions 7.3, 7.4 et 8.0 des fichiers sont installés sur
+  installés. Les versions des fichiers pour toutes les versions supportées de
+  &postgres; (<emphasis>c'est-à-dire</emphasis> 7.3, 7.4 et 8.0) sont installées sur
   chaque système, quelque soit la version de &postgres;. L'outil d'administration
   <xref linkend="slonik"/> effectue des substitutions d'espace de noms et de
   cluster dans ces fichiers, puis chargent les fichiers lors de la création d'un
@@ -242,6 +243,25 @@
   chargés à distance à partir des autres n&oelig;uds.).
 </para>
 
+<para> In &slony1; version 2.0, this changes:</para>
+<itemizedlist>
+<listitem><para><filename> $bindir/slon</filename></para></listitem>
+<listitem><para><filename> $bindir/slonik</filename></para></listitem>
+<listitem><para><filename> $libdir/slony1_funcs$(DLSUFFIX)</filename></para></listitem>
+<listitem><para><filename> $datadir/slony1_base.sql</filename></para></listitem>
+<listitem><para><filename> $datadir/slony1_funcs.sql</filename></para></listitem>
+</itemizedlist>
+
+<note> <para> Note the loss of <filename>xxid.so</filename> - the txid
+data type introduced in &postgres; 8.3 makes it
+obsolete. </para></note>
+
+<note> <para> &slony1; 2.0 gives up compatibility with versions of
+&postgres; prior to 8.3, and hence <quote>resets</quote> the
+version-specific base function handling.  There may be function files
+for version 8.3, 8.4, and such, as replication-relevant divergences of
+&postgres; functionality take place.  </para></note>
+
 </sect2>
 
 <sect2>
@@ -266,7 +286,8 @@
   ce bug mais il n'y a eu aucun progrès depuis. La seconde URL ci-dessous
   indique qu'il y a eu des tentatives de correction en élevant la valeur de
   NAMELEN dans une future version de Red Hat Enterprise Linux, mais cela n'est
-  pas le cas en 2005. Les distribution Fedora actuelles ont déjà corrigé ce
+  pas le cas if you are using an older version where this
+will never be rectified. Les distributions Fedora actuelles ont déjà corrigé ce
   problème.
 </para>
 

Modified: traduc/branches/slony_1_2/intro.xml
===================================================================
--- traduc/branches/slony_1_2/intro.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/intro.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -424,7 +424,7 @@
         Chaque événement SYNC appliqué doit être annoncé à tous les n&oelig;uds
 	participants à la réplication de l'ensemble de données, afin que chaque
 	n&oelig;ud sache qu'il est possible de purger les données des tables
-	<xref linkend="table.sl-log-1"/> et <xref linkend="table.sl-log-2"/>,
+	&sllog1; et &sllog2;,
 	car n'importe quel n&oelig;ud <quote>fournisseur</quote> peut
 	potentiellement devenir un <quote>maître</quote> à tout moment. On peut
 	s'attendre à que les messages SYNC ne soient propagés que sur n/2

Modified: traduc/branches/slony_1_2/legal.xml
===================================================================
--- traduc/branches/slony_1_2/legal.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/legal.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -5,7 +5,7 @@
      révision $Revision$ -->
 
 <copyright>
- <year>2004-2006</year>
+ <year>2004-2007</year>
  <holder>The PostgreSQL Global Development Group</holder>
 </copyright>
 
@@ -13,7 +13,7 @@
  <title>Notice légale</title>
 
  <para>
-  <productname>PostgreSQL</productname> est sous le Copyright &amp;copy; 2004-2006
+  <productname>PostgreSQL</productname> est sous le Copyright &amp;copy; 2004-2007
   du PostgreSQL Global Development Group et est distribué sous les termes
   de la licence de l'Université de Californie ci-dessous.
  </para>

Modified: traduc/branches/slony_1_2/listenpaths.xml
===================================================================
--- traduc/branches/slony_1_2/listenpaths.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/listenpaths.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -34,8 +34,8 @@
   <quote>parent</quote> qui leur transmet les mise à jours mais, en réalité,
   ils doivent pouvoir recevoir des messages de la part de
   <emphasis>tous</emphasis> les n&oelig;uds afin de pouvoir déterminer si les
-  <command>SYNC</command>s ont été reçues partout et que les entrées de <xref
-  linkend="table.sl-log-1"/> et <xref linkend="table.sl-log-2"/> ont été
+  <command>SYNC</command>s ont été reçues partout et que les entrées de
+  &sllog1; et &sllog2; ont été
   appliquées partout et qu'elles peuvent être purgées. Ces communications
   supplémentaires permettent à <productname>Slony-I</productname> de déplacer
   les origines vers d'autres n&oelig;uds.

Modified: traduc/branches/slony_1_2/locking.xml
===================================================================
--- traduc/branches/slony_1_2/locking.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/locking.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -21,6 +21,10 @@
   dans le sens où les <quote>vieilles lectures</quote> peuvent accéder aux
   <quote>anciennes lignes</quote>. La plupart du temps, cela évite aux aimables
   utilisateurs de &postgres; de trop se préoccuper des verrous.
+  &slony1; configuration events normally grab locks on an
+  internal table, <envar>sl_config_lock</envar>, which should not be
+  visible to applications unless they are performing actions on &slony1;
+  components.
 </para>
 
 <para>

Modified: traduc/branches/slony_1_2/loganalysis.xml
===================================================================
--- traduc/branches/slony_1_2/loganalysis.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/loganalysis.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -34,9 +34,24 @@
 
 </sect2>
 
+<sect2><title>INFO notices</title>
+
+<para> Events that take place that seem like they will generally be of
+interest are recorded at the INFO level, and, just as with CONFIG
+notices, are always listed. </para>
+
+</sect2>
+
 <sect2>
 <title>Notifications DEBUG</title>
 
+<para>Debug notices are of less interest, and will quite likely only
+need to be shown if you are running into some problem with &slony1;.</para>
+
+</sect2>
+
+<sect2><title>Thread name </title>
+
 <para>
   Les notifications DEBUG sont moins intéressantes et ne vous seront utiles
   que lorsque vous rencontrez une problème avec &slony1;.
@@ -112,6 +127,12 @@
   4 affichera des plus en plus de messages de niveau DEBUG.
 </para>
 
+<para> How much information they display is controlled by the
+<envar>log_level</envar> &lslon; parameter; ERROR/WARN/CONFIG/INFO
+messages will always be displayed, while choosing increasing values
+from 1 to 4 will lead to additional DEBUG level messages being
+displayed. </para>
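+
+<para> For example, a day-to-day configuration with modest verbosity
+might contain the following in the slon runtime configuration file
+(the values are purely illustrative): </para>
+
+<programlisting>
+# ERROR, WARN, CONFIG and INFO messages are always shown;
+# log_level=2 additionally shows DEBUG1 and DEBUG2 messages
+log_level=2
+# log to SLON_LOG rather than to syslog
+syslog=0
+</programlisting>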
+
 </sect2>
 
 <sect2>
@@ -2345,7 +2366,7 @@
     </para>
 
     <para>
-      Ceci ce produit actuellement (2007) car la désactivation d'un
+      Ceci ne devrait plus se produire maintenant (2007) car la désactivation d'un
       n&oelig;ud n'est pas une fonctionnalité supportée.
     </para>
   </listitem>
@@ -2372,6 +2393,22 @@
       <command>STORE_NODE</command> ne se propagent pas.
     </para>
   </listitem>
+  
+  <listitem>
+    <para>
+      <command>insert or update on table "sl_path" violates foreign key
+      constraint "pa_client-no_id-ref".  DETAIL: Key (pa_client)=(2) is
+      not present on table "s1_node</command>
+    </para>
+
+    <para>
+      This happens if you try to do <xref linkend="stmtsubscribeset"/>
+      when the node is unaware of a would-be new node; probably a sign of
+      <command>STORE_NODE</command> and <command>STORE_PATH</command>
+      requests not propagating...
+    </para>
+  </listitem>
+
 </itemizedlist>
 
 </sect3>

Modified: traduc/branches/slony_1_2/logshipping.xml
===================================================================
--- traduc/branches/slony_1_2/logshipping.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/logshipping.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -432,7 +432,7 @@
 -- Node 11, Event 656
 start transaction;
 
-select "_T1".setsyncTracking_offline(1, '655', '656', '2005-09-23 18:37:40.206342');
+select "_T1".setsyncTracking_offline(1, '655', '656', '2007-09-23 18:37:40.206342');
 -- end of log archiving header
       </programlisting>
     </para>
@@ -447,7 +447,7 @@
 -- Node 11, Event 109
 start transaction;
 
-select "_T1".setsyncTracking_offline(1, '96', '109', '2005-09-23 19:01:31.267403');
+select "_T1".setsyncTracking_offline(1, '96', '109', '2007-09-23 19:01:31.267403');
 -- end of log archiving header</programlisting>
     </para>
 
@@ -524,6 +524,29 @@
 
 </sect2>
 
+<sect2><title> <application> find-triggers-to-deactivate.sh
+</application> </title>
+
+<indexterm><primary> trigger deactivation </primary> </indexterm>
+
+<para> It was once pointed out (<ulink
+url="http://www.slony.info/bugzilla/show_bug.cgi?id=19"> Bugzilla bug
+#19</ulink>) that the dump of a schema may include triggers and rules
+that you may not wish to have running on the log shipped node.</para>
+
+<para> The tool <filename> tools/find-triggers-to-deactivate.sh
+</filename> was created to assist with this task.  It may be run
+against the node that is to be used as a schema source, and it will
+list the rules and triggers present on that node that may, in turn
+need to be deactivated.</para>
+
+<para> It includes <function>logtrigger</function> and <function>denyaccess</function>
+triggers which may be left out of the extracted schema, but it is
+still worth the Gentle Administrator verifying that such triggers are
+kept out of the log shipped replica.</para>
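+
+<para> As a hedged illustration (the object names here are
+hypothetical), a trigger or rule identified by the script would then be
+removed on the log shipped node with ordinary SQL: </para>
+
+<programlisting>
+-- run on the log shipped (destination) node only
+DROP TRIGGER some_application_trigger ON public.some_table;
+DROP RULE some_rule ON public.some_view;
+</programlisting>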
+
+</sect2>
+
 <sect2>
 <title>L'outil <application>slony_logshipper</application></title>
 
@@ -657,9 +680,7 @@
       erreur est rencontrée.
     </para>
   </listitem>
-</itemizedlist>
 
-<itemizedlist>
   <listitem>
     <para>Noms des fichiers d'archive</para>
 

Modified: traduc/branches/slony_1_2/maintenance.xml
===================================================================
--- traduc/branches/slony_1_2/maintenance.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/maintenance.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -16,9 +16,8 @@
     <listitem>
       <para>
         supprime les anciennes données sur les différentes tables du schéma
-        de <productname>Slony-I</productname>, notamment <xref
-	linkend="table.sl-log-1"/>, <xref linkend="table.sl-log-2"/> et <xref
-        linkend="table.sl-seqlog"/>&nbsp;;
+        de <productname>Slony-I</productname>, notamment &sllog1;, &sllog2;
+	et &slseqlog;&nbsp;;
       </para>
     </listitem>
 
@@ -54,22 +53,18 @@
     <listitem>
       <para>
         Le bogue <link linkend="dupkey">violation par clef dupliquée</link> a
-	permis d'isoler des situations de concurrence dans &postgres;. Des
-	problèmes subsistent notamment lorsque <command>VACUUM</command> ne
-        réclame pas correctement l'espace menant à une corruption des index de
-	type B-tree.
+	permis d'isoler a number of rather obscure
+        &postgres; race conditions, so that in modern versions of &slony1; and
+	&postgres;, there should be little to worry about.
       </para>
-
-      <para>
-        Il peut être utile de lancer la commande <command>REINDEX TABLE
-	sl_log_1;</command> périodiquement pour éviter ce problème.
-      </para>
     </listitem>
 
     <listitem>
       <para>
         À partir de la version 1.2, la fonctionnalité <quote>log
-	switching</quote> est arrivée&nbsp;;de temps en temps, elle tente
+	switching</quote> est arrivée&nbsp;; de temps en temps (by default, once per week,
+        though you may induce it by calling the stored
+        function <function>logswitch_start()</function>), elle tente
 	d'interchanger les données entre &sllog1; et &sllog2; afin de réaliser
 	un <command>TRUNCATE</command> sur les <quote>plus vieilles</quote>
 	données.
@@ -80,10 +75,70 @@
 	nettoyées ce qui évite qu'elles ne grossissent trop lors d'une charge
 	importante et qu'elles deviennent impossibles à nettoyer.
       </para>
+      
+<para> In version 2.0, <command>DELETE</command> is no longer used to
+clear out data in &sllog1; and &sllog2;; instead, the log switch logic
+is induced frequently, every time the cleanup loop does not find a
+switch in progress, and these tables are purely cleared out
+via <command>TRUNCATE</command>.  This eliminates the need to vacuum
+these tables. </para>
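+
+<para> Should you wish to force the issue, a log switch may also be
+requested by hand by calling the stored function mentioned above; a
+minimal sketch, assuming the cluster is named
+<quote>MyCluster</quote>: </para>
+
+<programlisting>
+SELECT "_MyCluster".logswitch_start();
+</programlisting>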
+
     </listitem>
   </itemizedlist>
 </para>
 
+<sect2 id="maintenance-autovac"> <title> Interaction with &postgres;
+autovacuum </title>
+
+<indexterm><primary>autovacuum interaction</primary></indexterm>
+
+<para> Recent versions of &postgres; support an
+<quote>autovacuum</quote> process which notices when tables are
+modified, thereby creating dead tuples, and vacuums those tables,
+<quote>on demand.</quote> It has been observed that this can interact
+somewhat negatively with &slony1;'s own vacuuming policies on its own
+tables. </para>
+
+<para> &slony1; requests vacuums on its tables immediately after
+completing transactions that are expected to clean out old data, which
+is generally the ideal time to do so.  It appears as though
+autovacuum may notice the changes a bit earlier and attempt
+vacuuming while those transactions are not yet complete, rendering the
+work largely useless.  It therefore seems preferable to configure
+autovacuum to avoid vacuuming &slony1;-managed configuration
+tables. </para>
+
+<para> The following query (change the cluster name to match your
+local configuration) will identify the tables that autovacuum should
+be configured not to process: </para>
+
+<programlisting>
+mycluster=# select oid, relname from pg_class where relnamespace = (select oid from pg_namespace where nspname = '_' || 'MyCluster') and relhasindex;
+  oid  |   relname    
+-------+--------------
+ 17946 | sl_nodelock
+ 17963 | sl_setsync
+ 17994 | sl_trigger
+ 17980 | sl_table
+ 18003 | sl_sequence
+ 17937 | sl_node
+ 18034 | sl_listen
+ 18017 | sl_path
+ 18048 | sl_subscribe
+ 17951 | sl_set
+ 18062 | sl_event
+ 18069 | sl_confirm
+ 18074 | sl_seqlog
+ 18078 | sl_log_1
+ 18085 | sl_log_2
+(15 rows)
+</programlisting>
+
+<para> The following query will populate
+<envar>pg_catalog.pg_autovacuum</envar> with suitable configuration
+information: </para>
+
+<programlisting>
+INSERT INTO pg_catalog.pg_autovacuum
+    (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
+     anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit,
+     freeze_min_age, freeze_max_age)
+  SELECT oid, 'f', -1, -1, -1, -1, -1, -1, -1, -1
+    FROM pg_catalog.pg_class
+   WHERE relnamespace = (SELECT oid FROM pg_namespace
+                          WHERE nspname = '_' || 'MyCluster')
+     AND relhasindex;
+</programlisting>
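+
+<para> Since <envar>pg_catalog.pg_autovacuum</envar> is an ordinary
+catalogue table, the entries may later be removed again should you
+want autovacuum to resume handling these tables; a sketch, again
+assuming the cluster is named <quote>MyCluster</quote>: </para>
+
+<programlisting>
+DELETE FROM pg_catalog.pg_autovacuum
+ WHERE vacrelid IN (SELECT oid FROM pg_catalog.pg_class
+                     WHERE relnamespace = (SELECT oid FROM pg_namespace
+                                            WHERE nspname = '_' || 'MyCluster'));
+</programlisting>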
+</sect2>
+
 <sect2><title>Chiens de garde&nbsp;: garder les Slons en vie</title>
 <indexterm><primary>Chiens de garde pour garder en vie les démons slon</primary></indexterm>
 
@@ -121,6 +176,7 @@
 <sect2 id="gensync"><title>En parallèle aux chiens de garde&nbsp;:
 generate_syncs.sh</title>
 
+<indexterm><primary>generate SYNCs</primary></indexterm>
 <para>
   Un nouveau script est apparu dans &slony1; 1.1, il s'agit de
   <application>generate_syncs.sh</application>, qui est utilise dans les
@@ -166,8 +222,8 @@
 <indexterm><primary>tester le statut du cluster</primary></indexterm>
 
 <para>
-  Dans le répertoire <filename>tools</filename>, vous trouverez des scripts
-  nommés <filename>test_slony_state.pl</filename> et
+  Dans le répertoire <filename>tools</filename>, vous trouverez les scripts
+  &lteststate; nommés <filename>test_slony_state.pl</filename> et
   <filename>test_slony_state-dbi.pl</filename>. Le premier utilise l'interface
   Perl/DBI, l'autre utilise l'interface PostgreSQL.
 </para>
@@ -313,6 +369,8 @@
 
 <sect2><title>Autres tests de réplication</title>
 
+<indexterm><primary>testing replication</primary></indexterm>
+
 <para>
   La méthodologie de la section précédente est conçu avec un vue pour minimiser
   le coût des requêtes de tests&nbsp;; sur un cluster très chargé, supportant
@@ -411,4 +469,101 @@
 
 </sect2>
 
+<sect2><title>mkservice </title>
+<indexterm><primary>mkservice for BSD </primary></indexterm>
+
+<sect3><title>slon-mkservice.sh</title>
+
+<para> Create a slon service directory for use with svscan from
+daemontools.  This uses multilog in a pretty basic way, which seems to
+be standard for daemontools / multilog setups. If you want clever
+logging, see logrep below. Currently this script has very limited
+error handling capabilities.</para>
+
+<para> For non-interactive use, set the following environment
+variables: <envar>BASEDIR</envar>, <envar>SYSUSR</envar>,
+<envar>PASSFILE</envar>, <envar>DBUSER</envar>, <envar>HOST</envar>,
+<envar>PORT</envar>, <envar>DATABASE</envar>, <envar>CLUSTER</envar>,
+<envar>SLON_BINARY</envar>.  If any of the above are not set, the script
+asks for configuration information interactively.</para>
+
+<itemizedlist>
+<listitem><para>
+<envar>BASEDIR</envar> where you want the service directory structure for the slon
+to be created. This should <emphasis>not</emphasis> be the <filename>/var/service</filename> directory.</para></listitem>
+<listitem><para>
+<envar>SYSUSR</envar> the unix user under which the slon (and multilog) process should run.</para></listitem>
+<listitem><para>
+<envar>PASSFILE</envar> location of the <filename>.pgpass</filename> file to be used. (default <filename>~sysusr/.pgpass</filename>)</para></listitem>
+<listitem><para>
+<envar>DBUSER</envar> the postgres user the slon should connect as (default slony)</para></listitem>
+<listitem><para>
+<envar>HOST</envar> what database server to connect to (default localhost)</para></listitem>
+<listitem><para>
+<envar>PORT</envar> what port to connect to (default 5432)</para></listitem>
+<listitem><para>
+<envar>DATABASE</envar> which database to connect to (default dbuser)</para></listitem>
+<listitem><para>
+<envar>CLUSTER</envar> the name of your Slony1 cluster (default database)</para></listitem>
+<listitem><para>
+<envar>SLON_BINARY</envar> the full path name of the slon binary (default <command>which slon</command>)</para></listitem>
+</itemizedlist>
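+
+<para> As a purely hypothetical sketch of non-interactive use (every
+value below is a placeholder chosen for illustration, not a shipped
+default), an invocation might look like: </para>
+
+<screen>
+BASEDIR=/var/svc.d/slon_mycluster SYSUSR=slony PASSFILE=/home/slony/.pgpass \
+DBUSER=slony HOST=localhost PORT=5432 DATABASE=mydb CLUSTER=mycluster \
+SLON_BINARY=/usr/local/pgsql/bin/slon ./slon-mkservice.sh
+</screen>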
+</sect3>
+
+<sect3><title>logrep-mkservice.sh</title>
+
+<para>This uses <command>tail -F</command> to pull data from log files allowing
+you to use multilog filters (by setting the CRITERIA) to create
+special purpose log files. The goal is to provide a way to monitor log
+files in near realtime for <quote>interesting</quote> data without either
+hacking up the initial log file or wasting CPU/IO by re-scanning the
+same log repeatedly.
+</para>
+
+<para>For non-interactive use, set the following environment
+variables: <envar>BASEDIR</envar>, <envar>SYSUSR</envar>, <envar>SOURCE</envar>,
+<envar>EXTENSION</envar>, <envar>CRITERIA</envar>.  If any of the above are not set,
+the script asks for configuration information interactively.
+</para>
+
+<itemizedlist>
+<listitem><para>
+<envar>BASEDIR</envar> where you want the service directory structure for the logrep
+to be created. This should <emphasis>not</emphasis> be the <filename>/var/service</filename> directory.</para></listitem>
+<listitem><para><envar>SYSUSR</envar> unix user under which the service should run.</para></listitem>
+<listitem><para><envar>SOURCE</envar> name of the service with the log you want to follow.</para></listitem>
+<listitem><para><envar>EXTENSION</envar> a tag to differentiate this logrep from others using the same source.</para></listitem>
+<listitem><para><envar>CRITERIA</envar> the multilog filter you want to use.</para></listitem>
+</itemizedlist>
+
+<para> A trivial example of this would be to provide a log file of all slon
+ERROR messages which could be used to trigger a nagios alarm.
+<command>EXTENSION='ERRORS'</command>
+<command>CRITERIA="'-*' '+* * ERROR*'"</command>
+(Reset the monitor by rotating the log using <command>svc -a $svc_dir</command>)
+</para>
+
+<para> A more interesting application is a subscription progress log.
+<command>EXTENSION='COPY'</command>
+<command>CRITERIA="'-*' '+* * ERROR*' '+* * WARN*' '+* * CONFIG enableSubscription*' '+* * DEBUG2 remoteWorkerThread_* prepare to copy table*' '+* * DEBUG2 remoteWorkerThread_* all tables for set * found on subscriber*' '+* * DEBUG2 remoteWorkerThread_* copy*' '+* * DEBUG2 remoteWorkerThread_* Begin COPY of table*' '+* * DEBUG2 remoteWorkerThread_* * bytes copied for table*' '+* * DEBUG2 remoteWorkerThread_* * seconds to*' '+* * DEBUG2 remoteWorkerThread_* set last_value of sequence*' '+* * DEBUG2 remoteWorkerThread_* copy_set*'"</command>
+</para>
+
+<para>If you have a subscription log then it's easy to determine if a given
+slon is in the process of handling copies or other subscription activity.
+If the log isn't empty, and doesn't end with a 
+<command>"CONFIG enableSubscription: sub_set:1"</command>
+(or whatever set number you've subscribed to), then the slon is currently in
+the middle of initial copies.</para>
+
+<para> If you happen to be monitoring the mtime of your primary slony logs to 
+determine if your slon has gone brain-dead, checking this is a good way
+to avoid mistakenly clobbering it in the middle of a subscribe. As a bonus,
+recall that since the slons are running under svscan, you only need to
+kill it (via the svc interface) and let svscan start it up again later.
+I've also found the COPY logs handy for following subscribe activity 
+interactively.</para>
+</sect3>
+
+</sect2>
+
 </sect1>

Modified: traduc/branches/slony_1_2/monitoring.xml
===================================================================
--- traduc/branches/slony_1_2/monitoring.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/monitoring.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -8,6 +8,166 @@
 <title>Surveillance</title>
 <indexterm><primary>Surveiller &slony1;</primary></indexterm>
 
+<para> As a prelude to the discussion, it is worth pointing out that
+the bulk of &slony1; functionality is implemented via database
+functions and SQL queries against tables within the &slony1; schema.
+Consequently, most of the things that one might want to monitor about
+replication may be found by querying tables in the schema created for
+the cluster, in each database in the cluster. </para>
+
+<para> Here are some of the tables that contain information likely to
+be particularly interesting from a monitoring and diagnostic
+perspective.</para>
+
+<glosslist>
+<glossentry><glossterm><envar>sl_status</envar></glossterm>
+
+<glossdef><para>This view is the first, most obviously useful thing to
+look at from a monitoring perspective.  It looks at the local node's
+events, and checks to see how quickly they are being confirmed on
+other nodes.</para>
+
+<para> The view is primarily useful to run against an origin
+(<quote>master</quote>) node, as it is only there that the events
+generated are generally expected to require interesting work to be
+done.  The events generated on non-origin nodes tend to
+be <command>SYNC</command> events that require no replication work be
+done, and that are nearly no-ops as a
+result. </para></glossdef></glossentry>
+
+<glossentry><glossterm>&slconfirm;</glossterm>
+
+<glossdef><para>Contains confirmations of replication events; this may be used to infer which events have, and <emphasis>have not</emphasis> been processed.</para></glossdef></glossentry>
+
+<glossentry><glossterm>&slevent;</glossterm>
+<glossdef><para>Contains information about the replication events processed on the local node.  </para></glossdef></glossentry>
+
+<glossentry><glossterm>
+&sllog1;
+and
+&sllog2;
+</glossterm>
+
+<glossdef><para>These tables contain replicable data.  On an origin node, this is the <quote>queue</quote> of data that has not necessarily been replicated everywhere.  By examining the table, you may examine the details of what data is replicable. </para></glossdef></glossentry>
+
+<glossentry><glossterm>&slnode;</glossterm>
+<glossdef><para>The list of nodes in the cluster.</para></glossdef></glossentry>
+
+<glossentry><glossterm>&slpath;</glossterm>
+<glossdef><para>This table holds connection information indicating how &lslon; processes are to connect to remote nodes, whether to access events, or to request replication data. </para></glossdef></glossentry>
+
+<glossentry><glossterm>&sllisten;</glossterm>
+
+<glossdef><para>This configuration table indicates how nodes listen
+for events coming from other nodes.  Usually this is automatically
+populated; generally you can detect configuration problems by this
+table being <quote>underpopulated.</quote> </para></glossdef></glossentry>
+
+<glossentry><glossterm>&slregistry;</glossterm>
+
+<glossdef><para>A configuration table that may be used to store
+miscellaneous runtime data.  Presently used only to manage switching
+between the two log tables.  </para></glossdef></glossentry>
+
+<glossentry><glossterm>&slseqlog;</glossterm>
+
+<glossdef><para>Contains the <quote>last value</quote> of replicated
+sequences.</para></glossdef></glossentry>
+
+<glossentry><glossterm>&slset;</glossterm>
+
+<glossdef><para>Contains definition information for replication sets,
+which is the mechanism used to group together related replicable
+tables and sequences.</para></glossdef></glossentry>
+
+<glossentry><glossterm>&slsetsync;</glossterm>
+<glossdef><para>Contains information about the state of synchronization of each replication set, including transaction snapshot data.</para></glossdef></glossentry>
+
+<glossentry><glossterm>&slsubscribe;</glossterm>
+<glossdef><para>Indicates what subscriptions are in effect for each replication set.</para></glossdef></glossentry>
+
+<glossentry><glossterm>&sltable;</glossterm>
+<glossdef><para>Contains the list of tables being replicated.</para></glossdef></glossentry>
+
+</glosslist>
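+
+<para> As a small illustration of putting these tables to use, the
+following query (a sketch; substitute your own cluster name for
+<quote>MyCluster</quote>) looks at <envar>sl_status</envar> on an
+origin node to see how far behind each subscriber is: </para>
+
+<programlisting>
+SELECT st_received, st_lag_num_events, st_lag_time
+  FROM "_MyCluster".sl_status
+ ORDER BY st_lag_time DESC;
+</programlisting>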
+
+<sect2 id="testslonystate"> <title> test_slony_state</title>
+
+<indexterm><primary>script test_slony_state to test replication state</primary></indexterm>
+
+<para> This invaluable script does various sorts of analysis of the
+state of a &slony1; cluster.  &slony1; <xref linkend="bestpractices"/>
+recommend running these scripts frequently (hourly seems suitable) to
+find problems as early as possible.  </para>
+
+<para> You specify arguments including <option>database</option>,
+<option>host</option>, <option>user</option>,
+<option>cluster</option>, <option>password</option>, and
+<option>port</option> to connect to any of the nodes on a cluster.
+You also specify a <option>mailprog</option> command (which should be
+a program equivalent to <productname>Unix</productname>
+<application>mailx</application>) and a recipient of email. </para>
+
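+<para> By way of illustration only (the exact option spellings should
+be checked against the script's own usage message), an invocation
+might look something like: </para>
+
+<screen>
+perl tools/test_slony_state-dbi.pl --database=mydb --host=node1.example.com \
+     --user=slony --password=secret --port=5432 --cluster=MyCluster \
+     --mailprog=/usr/bin/mailx --recipient=dba@example.com
+</screen>
+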
+<para> You may alternatively specify database connection parameters
+via the environment variables used by
+<application>libpq</application>, <emphasis>e.g.</emphasis> - using
+<envar>PGPORT</envar>, <envar>PGDATABASE</envar>,
+<envar>PGUSER</envar>, <envar>PGSERVICE</envar>, and such.</para>
+
+<para> The script then rummages through <xref linkend="table.sl-path"/>
+to find all of the nodes in the cluster, and the DSNs to allow it to,
+in turn, connect to each of them.</para>
+
+<para> For each node, the script examines the state of things,
+including such things as:
+
+<itemizedlist>
+<listitem><para> Checking <xref linkend="table.sl-listen"/> for some
+<quote>analytically determinable</quote> problems.  It lists paths
+that are not covered.</para></listitem>
+
+<listitem><para> Providing a summary of events by origin node</para>
+
+<para> If a node hasn't submitted any events in a while, that likely
+suggests a problem.</para></listitem>
+
+<listitem><para> Summarizes the <quote>aging</quote> of table <xref
+linkend="table.sl-confirm"/> </para>
+
+<para> If one or another of the nodes in the cluster hasn't reported
+back recently, that tends to lead to cleanups of tables like &sllog1;,
+&sllog2; and &slseqlog; not taking place.</para></listitem>
+
+<listitem><para> Summarizes what transactions have been running for a
+long time</para>
+
+<para> This only works properly if the statistics collector is
+configured to collect command strings, as controlled by the option
+<option> stats_command_string = true </option> in <filename>
+postgresql.conf </filename>.</para>
+
+<para> If you have broken applications that hold connections open,
+this will find them.</para>
+
+<para> Such applications that hold connections open also have
+several unsalutary effects, as <link
+linkend="longtxnsareevil"> described in the
+FAQ</link>.</para></listitem>
+
+</itemizedlist></para>
+
+<para> The script does some diagnosis work based on parameters in the
+script; if you don't like the values, pick your favorites!</para>
+
+<note><para> Note that there are two versions, one using the
+<quote>classic</quote> <filename>Pg.pm</filename> Perl module for
+accessing &postgres; databases, and one, with <filename>dbi</filename>
+in its name, that uses the newer Perl <function> DBI</function>
+interface.  It is likely going to be easier to find packaging for
+<function>DBI</function>. </para> </note>
+
+</sect2>
+
 <sect2>
 <title>Tester la replication avec &nagios;</title>
 <indexterm><primary>&nagios; pour surveiller la réplication</primary></indexterm>
@@ -316,7 +476,13 @@
   Le script <filename>mkmediawiki.pl </filename>, situé dans
   <filename>tools</filename>, peut être utilisé pour générer un rapport de
   surveillance du cluster compatible avec le populaire logiciel <ulink
-  url="http://www.mediawiki.org/">MediaWiki</ulink>.
+  url="http://www.mediawiki.org/">MediaWiki</ulink>. Note that the
+  <option>--categories</option> option permits the user to specify a set of
+  (comma-delimited) categories with which to associate the output.  If
+  you have a series of &slony1; clusters, passing in the option
+  <option>--categories=slony1</option> leads to the MediaWiki instance
+  generating a category page listing all &slony1; clusters so
+  categorized on the wiki.
 </para>
 
 <para>
@@ -326,7 +492,7 @@
 <screen>
 ~/logtail.en>         mvs login -d mywiki.example.info -u "Chris Browne" -p `cat ~/.wikipass` -w wiki/index.php                     
 Doing login with host: logtail and lang: en
-~/logtail.en> perl $SLONYHOME/tools/mkmediawiki.pl --host localhost --database slonyregress1 --cluster slony_regress1 > Slony_replication.wiki
+~/logtail.en> perl $SLONYHOME/tools/mkmediawiki.pl --host localhost --database slonyregress1 --cluster slony_regress1 --categories=Slony-I > Slony_replication.wiki
 ~/logtail.en> mvs commit -m "More sophisticated generated Slony-I cluster docs" Slony_replication.wiki
 Doing commit Slony_replication.wiki with host: logtail and lang: en
 </screen>
@@ -341,4 +507,81 @@
 
 </sect2>
 
+<sect2>
+<title> Analysis of a SYNC </title>
+
+<para> The following is (as of 2.0) an extract from the &lslon; log for node
+#2 in a run of <quote>test1</quote> from the <xref linkend="testbed"/>. </para>
+
+<screen>
+DEBUG2 remoteWorkerThread_1: SYNC 19 processing
+INFO   about to monitor_subscriber_query - pulling big actionid list 134885072
+INFO   remoteWorkerThread_1: syncing set 1 with 4 table(s) from provider 1
+DEBUG2  ssy_action_list length: 0
+DEBUG2 remoteWorkerThread_1: current local log_status is 0
+DEBUG2 remoteWorkerThread_1_1: current remote log_status = 0
+DEBUG1 remoteHelperThread_1_1: 0.028 seconds delay for first row
+DEBUG1 remoteHelperThread_1_1: 0.978 seconds until close cursor
+INFO   remoteHelperThread_1_1: inserts=144 updates=1084 deletes=0
+INFO   remoteWorkerThread_1: sync_helper timing:  pqexec (s/count)- provider 0.063/6 - subscriber 0.000/6
+INFO   remoteWorkerThread_1: sync_helper timing:  large tuples 0.315/288
+DEBUG2 remoteWorkerThread_1: cleanup
+INFO   remoteWorkerThread_1: SYNC 19 done in 1.272 seconds
+INFO   remoteWorkerThread_1: SYNC 19 sync_event timing:  pqexec (s/count)- provider 0.001/1 - subscriber 0.004/1 - IUD 0.972/248
+</screen>
+
+<para> Here are some notes to interpret this output: </para>
+
+<itemizedlist>
+<listitem><para> Note the line that indicates <screen>inserts=144 updates=1084 deletes=0</screen> </para> 
+<para> This indicates how many tuples were affected by this particular SYNC. </para></listitem>
+<listitem><para> Note the line indicating <screen>0.028 seconds delay for first row</screen></para> 
+
+<para> This indicates the time it took for the <screen>LOG
+cursor</screen> to get to the point of processing the first row of
+data.  Normally, this takes a long time if the SYNC is a large one,
+and one requiring sorting of a sizable result set.</para></listitem>
+
+<listitem><para> Note the line indicating <screen>0.978 seconds until
+close cursor</screen></para> 
+
+<para> This indicates how long processing took against the
+provider.</para></listitem>
+
+<listitem><para> sync_helper timing:  large tuples 0.315/288 </para> 
+
+<para> This breaks off, as a separate item, the number of large tuples
+(<emphasis>e.g.</emphasis> - where size exceeded the configuration
+parameter <xref linkend="slon-config-max-rowsize"/>) and where the
+tuples had to be processed individually. </para></listitem>
+
+<listitem><para> <screen>SYNC 19 done in 1.272 seconds</screen></para> 
+
+<para> This indicates that it took 1.272 seconds, in total, to process
+this set of SYNCs. </para>
+</listitem>
+
+<listitem><para> <screen>SYNC 19 sync_event timing:  pqexec (s/count)- provider 0.001/1 - subscriber 0.004/1 - IUD 0.972/248</screen></para> 
+
+<para> This records information about how many queries were issued
+against providers and subscribers in function
+<function>sync_event()</function>, and how long they took. </para>
+
+<para> Note that 248 does not match against the numbers of inserts,
+updates, and deletes, described earlier, as I/U/D requests are
+clustered into groups of queries that are submitted via a single
+<function>pqexec()</function> call on the
+subscriber. </para></listitem>
+
+<listitem><para> <screen>sync_helper timing:  pqexec (s/count)- provider 0.063/6 - subscriber 0.000/6</screen></para>
+
+<para> This records information about how many queries were issued
+against providers and subscribers in function
+<function>sync_helper()</function>, and how long they took.
+</para></listitem>
+
+</itemizedlist>
+
+</sect2>
+
 </sect1>

Modified: traduc/branches/slony_1_2/partitioning.xml
===================================================================
--- traduc/branches/slony_1_2/partitioning.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/partitioning.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -152,8 +152,7 @@
 </itemizedlist>
 
 <para>
-  Il existe plusieurs fonctions qui prennent en charge cela, pour les versions
-  8.1 et ultérieures de &postgres;. L'utilisateur
+  Il existe plusieurs fonctions qui prennent en charge cela. L'utilisateur
   peut utiliser celle qu'il préfère. La <quote>fonction de base</quote> est
   <function>add_empty_table_to_replication()</function>, les autres disposent
   d'arguments supplémentaires ou différents.

Modified: traduc/branches/slony_1_2/prerequisites.xml
===================================================================
--- traduc/branches/slony_1_2/prerequisites.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/prerequisites.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -15,12 +15,13 @@
 <indexterm><primary>plates-formes sur lesquelles &slony1; fonctionne</primary></indexterm>
 
 <para>
-  Les plates-formes ayant été testées spécifiquement à ce jour pour cette
-  version sont FreeBSD-4X-i368, FreeBSD-5X-i386,FreeBSD-5X-alpha, OS-X-10.3,
-  Linux-2.4X-i386 Linux-2.6X-i386, Linux-2.6X-amd64,
-  <trademark>Solaris</trademark>-2.8-SPARC,
-  <trademark>Solaris</trademark>-2.9-SPARC, AIX 5.1, OpenBSD-3.5-sparc64 et
-  &windows; 2000, XP et 2003 (32 bit).
+  Les plates-formes ayant été testées spécifiquement sont FreeBSD-4X-i368,
+  FreeBSD-5X-i386, FreeBSD-5X-alpha, OS-X-10.3, Linux-2.4X-i386, Linux-2.6X-i386,
+  Linux-2.6X-amd64, <trademark>Solaris</trademark>-2.8-SPARC,
+  <trademark>Solaris</trademark>-2.9-SPARC, AIX 5.1 et 5.3, OpenBSD-3.5-sparc64
+  et &windows; 2000, XP et 2003 (32 bit). There
+  is enough diversity amongst these platforms that nothing ought to
+  prevent running &slony1; on other similar platforms.
 </para>
 
 <sect2>
@@ -95,6 +96,12 @@
 	<xref linkend="faq" />, <link linkend="pg81funs">&postgres;
 	8.1.[0-3]</link>.
       </para>
+
+      <para>
+        There is variation between what versions of &postgres; are
+        compatible with what versions of &slony1;.  See <xref
+        linkend="installation"/> for more details.
+      </para>
     </listitem>
 
     <listitem>
@@ -147,7 +154,7 @@
 
 <note>
   <para>
-    Dans la version 1.1 de &slony1;, il est possible de compiler &slony1;
+    À partir de la version 1.1 de &slony1;, il est possible de compiler &slony1;
     séparemment de &postgres;, rendant libres les distributions
     <productname>Linux</productname> et
     <productname>FreeBSD</productname> d'inclure des packages binaires

Modified: traduc/branches/slony_1_2/releasechecklist.xml
===================================================================
--- traduc/branches/slony_1_2/releasechecklist.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/releasechecklist.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -103,6 +103,9 @@
       Purgez le répertoire <filename>autom4te.cache</filename> afin qu'il ne
       soit pas inclus dans la compilation.
     </para>
+    <para>
+      This does not need to be done by hand; the later <command>make distclean</command> step does this for you.
+    </para>
   </listitem> 
 
   <listitem>
@@ -110,6 +113,9 @@
       Purgez les fichiers .cvsignore&nbsp;; cela peut se faire avec la commande
       <command>find . -name .cvsignore | xargs rm</command>
     </para>
+    <para>
+      This does not need to be done by hand; the later <command>make distclean</command> step does this for you.
+    </para>
   </listitem>
 
   <listitem>
@@ -135,7 +141,7 @@
   </listitem>
 
   <listitem>
-    <para>PACKAGE_STRING=postgresql-slony1-engine REL_1_1_2</para>
+    <para>PACKAGE_STRING=slony1-engine REL_1_1_2</para>
   </listitem>
 
 </itemizedlist>
@@ -184,6 +190,11 @@
       <command> ./configure &amp;&amp; make all &amp;&amp; make clean</command>
       mais c'est une approche quelque peu disgracieuse.
     </para>
+
+    <para>
+      Slightly better may be
+      <command>./configure &amp;&amp; make src/slon/conf-file.c src/slonik/parser.c src/slonik/scan.c</command>
+    </para>
   </listitem> 
 
   <listitem>
@@ -195,6 +206,13 @@
     <para>
       <command>make distclean</command> le fera pour vous...
     </para>
+
+    <para>
+      Note that <command>make distclean</command> also clears out
+      <filename>.cvsignore</filename> files and
+      <filename>autom4te.cache</filename>, thus obsoleting some former steps
+      that suggested that it was needful to delete them.
+    </para>
   </listitem>
 
   <listitem>

Modified: traduc/branches/slony_1_2/reshape.xml
===================================================================
--- traduc/branches/slony_1_2/reshape.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/reshape.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -63,6 +63,15 @@
       </para>
     </listitem>
 
+    <listitem>
+      <para>
+        After performing the configuration change, you should, as <xref
+	linkend="bestpractices"/>, run the &lteststate; scripts in order to
+	validate that the cluster state remains in good order after this
+	change.
+      </para>
+    </listitem>
+
   </itemizedlist>
 
 </para>

Modified: traduc/branches/slony_1_2/slon.xml
===================================================================
--- traduc/branches/slony_1_2/slon.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/slon.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -67,10 +67,13 @@
 	  
           <para>
 	    Les cinq premiers niveaux de débogage (de Fatal à Info) sont
-	    <emphasis>toujours</emphasis> affichés dans les traces. Si
-            <envar>log_level</envar> est configuré à 2 (un choix routinier et,
-	    généralement, préférable), alors les messages de niveaux de
-	    débogage 1 et 2 seront aussi envoyés.
+	    <emphasis>toujours</emphasis> affichés dans les traces. In
+            early versions of &slony1;, the <quote>suggested</quote>
+            <envar>log_level</envar> value was 2, which would list output at
+            all levels down to debugging level 2.  In &slony1; version 2, it
+            is recommended to set <envar>log_level</envar> to 0; most of the
+            consistently interesting log information is generated at levels
+            higher than that.
 	  </para>
         </listitem>
       </varlistentry>

Modified: traduc/branches/slony_1_2/slonconf.xml
===================================================================
--- traduc/branches/slony_1_2/slonconf.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/slonconf.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -102,11 +102,16 @@
       <listitem>
         <para>Niveau de traces de débogage (plus la valeur est haute, plus les
 	      messages sont verbeux).
-	      Valeurs possibles&nbsp;: de 0 à 4. Valeur par défaut&nbsp;: 2.</para>
+	      Valeurs possibles&nbsp;: de 0 à 4. Valeur par défaut&nbsp;: 0.</para>
 
 	      <para>Il y a <link linkend="nineloglevels">neuf niveaux de messages
 	      de trace</link>&nbsp;; en utilisant cette option, une partie ou l'ensemble
-	      des niveaux <quote>debug</quote> peut être désactivé.</para>
+	      des niveaux <quote>debug</quote> peut être désactivé. In &slony1; version 2, a lot of log message levels have
+	been revised in an attempt to ensure the <quote>interesting
+	stuff</quote> comes in at CONFIG/INFO levels, so that you
+	could run at level 0, omitting all of the <quote>DEBUG</quote>
+	messages, and still have meaningful contents in the
+	logs.</para>
       </listitem>
     </varlistentry>
     
@@ -133,6 +138,12 @@
       <listitem>
         <para>Détermine si l'horodatage de chaque événement doit
 	      apparaître dans chaque ligne du journal applicatif.</para>
+
+        <para> Note that if <envar>syslog</envar> usage is configured,
+        then this is ignored; it is assumed that
+        <application>syslog</application> will be supplying
+        timestamps, and timestamps are therefore suppressed.
+        </para>
       </listitem>
     </varlistentry>
 
@@ -291,6 +302,13 @@
         <para>Fréquence maximale (en millisecondes) de vérification des mises à jour.
 	  Valeurs possibles&nbsp;: de 10 à 60000, La valeur par défaut est 100.
         </para>
+
+        <para> This parameter is primarily of concern on nodes that
+          originate replication sets.  On a non-origin node, there
+          will never be update activity that would induce a SYNC;
+          instead, the timeout value described below will induce a
+          SYNC every so often <emphasis>despite absence of changes to
+          replicate.</emphasis> </para>
       </listitem>
     </varlistentry>
 
@@ -321,6 +339,41 @@
           <envar>sync_interval_timeout</envar>. 
 	  Valeurs possibles&nbsp;: [0-120000]. Valeur par défaut&nbsp;: 1000.
         </para>
+
+        <para> This parameter is likely to be primarily of concern on
+          nodes that originate replication sets, though it does affect
+          how often events are generated on other nodes.</para>
+
+	<para>
+          On a non-origin node, there never is activity to cause a
+          SYNC to get generated; as a result, there will be a SYNC
+          generated every <envar>sync_interval_timeout</envar>
+          milliseconds.  There are no subscribers looking for those
+          SYNCs, so these events do not lead to any replication
+          activity.  They will, however, clutter sl_event up a little,
+          so it would be undesirable for this timeout value to be set
+          too terribly low.  120000ms represents 2 minutes, which is
+          not a terrible value.
+        </para>
+
+	<para> The two values function together in varying ways: </para>
+
+	<para> On an origin node, <envar>sync_interval</envar> is
+	the <emphasis>minimum</emphasis> time period that will be
+	covered by a SYNC, and during periods of heavy application
+	activity, it may be that a SYNC is being generated
+	every <envar>sync_interval</envar> milliseconds. </para>
+
+	<para> On that same origin node, there may be quiet intervals,
+	when no replicatable changes are being submitted.  A SYNC will
+	be induced, anyways,
+	every <envar>sync_interval_timeout</envar>
+	milliseconds. </para>
+
+	<para> On a subscriber node that does not originate any sets,
+	only the <quote>timeout-induced</quote> SYNCs will
+	occur.  </para>
+
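+	<para> As a sketch, the corresponding fragment of
+	a <filename>slon.conf</filename> on an origin node might read
+	as follows (the values are merely illustrative, within the
+	documented ranges): </para>
+
+<programlisting>
+# generate a SYNC at most every 2 seconds while updates are arriving...
+sync_interval=2000
+# ...and at least once a minute even when the node is idle
+sync_interval_timeout=60000
+</programlisting>
+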
       </listitem>
     </varlistentry>
 
@@ -332,15 +385,19 @@
       </term>
       <listitem>
         <para>
-          Nombre maximum d'événements <command>SYNC</command> qui seront regroupés
-	  ensemble lorsqu'un n&oelig;ud abonné tombe en panne.
-	  Les événements <command>SYNC</command>s ne sont empaquetés 
-	  que s'ils sont nombreux et qu'ils sont contiguës.
-	  S'il n'y qu'un seul événement <command>SYNC</command> disponible,
-	  même l'option <option>-g60</option> s'appliquera à cet évènement unique.
-	  Dès qu'un n&oelig;ud abonné rattrape son retard, il appliquera chaque événement
-	  <command>SYNC</command> individuellement.
-	  Valeurs possibles&nbsp;: [0,10000]. Valeur par défaut&nbsp;: 20.
+          Maximum number of <command>SYNC</command> events that a
+          subscriber node will group together when/if a subscriber
+          falls behind.  <command>SYNC</command>s are batched only if
+          there are that many available and if they are
+          contiguous.  Every other event type in between leads to a
+          smaller batch.  And if there is only
+          one <command>SYNC</command> available, even though you used
+          <option>-g600</option>, the &lslon; will apply just the one
+          that is available.  As soon as a subscriber catches up, it
+          will tend to apply each
+          <command>SYNC</command> by itself, as a singleton, unless
+          processing should fall behind for some reason.  Range:
+          [0,10000], default: 20
         </para>
       </listitem>
     </varlistentry>
@@ -361,6 +418,35 @@
       </listitem>
     </varlistentry>
 
+    <varlistentry id="slon-config-cleanup-interval" xreflabel="slon_config_cleanup_interval">
+      <term><varname>cleanup_interval</varname> (<type>interval</type>)</term>
+      <indexterm>
+        <primary><varname>cleanup_interval</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+        <para>
+          Controls how quickly old events are trimmed out.  That
+          subsequently controls when the data in the log tables,
+          <envar>sl_log_1</envar> and <envar>sl_log_2</envar>, get
+          trimmed out.  Default: '10 minutes'.
+        </para>
+      </listitem>
+    </varlistentry>
+
+    <varlistentry id="slon-config-cleanup-deletelogs" xreflabel="slon_conf_cleanup_deletelogs">
+      <term><varname>cleanup_deletelogs</varname> (<type>boolean</type>)</term>
+      <indexterm>
+        <primary><varname>cleanup_deletelogs</varname> configuration parameter</primary>
+      </indexterm>
+      <listitem>
+        <para>
+          Controls whether or not we use DELETE to trim old data from the log tables,
+          <envar>sl_log_1</envar> and <envar>sl_log_2</envar>.
+          Default: false
+        </para>
+      </listitem>
+    </varlistentry>
+
     <varlistentry id="slon-config-desired-sync-time" xreflabel="desired_sync_time">
       <term><varname>desired_sync_time</varname>  (<type>entier</type>)
       <indexterm>

Modified: traduc/branches/slony_1_2/slonik_ref.xml
===================================================================
--- traduc/branches/slony_1_2/slonik_ref.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/slonik_ref.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -83,6 +83,23 @@
 
          <para>Ces commandes sont regroupées ensemble au sein d'une transaction
            pour chaque n&oelig;ud participant.</para>
+
+     <para> Note that this does not enforce grouping of the actions as
+     a single transaction on all nodes.  For instance, consider the
+     following slonik code:</para>
+     <programlisting>
+     try {
+         execute script (set id = 1, filename = '/tmp/script1.sql', event node=1);
+         execute script (set id = 1, filename = '/tmp/script2.sql', event node=1);
+     }
+     </programlisting>
+
+     <para> This <emphasis>would</emphasis> be processed within a
+     single BEGIN/COMMIT on node 1.  However, the requests are
+     separated into two <command>DDL_SCRIPT</command> events so that
+     each will be run individually, in separate transactions, on other
+     nodes in the cluster. </para>
+
       </sect3>
     </sect2>
   </sect1>
@@ -100,7 +117,7 @@
   </partintro>
   <!-- **************************************** -->
   <refentry id ="stmtinclude">
-    <refmeta><refentrytitle>INCLUDE</refentrytitle><manvolnum>7</manvolnum></refmeta>
+    <refmeta><refentrytitle>SLONIK INCLUDE</refentrytitle><manvolnum>7</manvolnum></refmeta>
     <refnamediv>
       <refname>INCLUDE</refname>
       <refpurpose>insérer du code slonik à partir d'un autre fichier</refpurpose>
@@ -130,7 +147,7 @@
     </refsect1>
   </refentry>
   <!-- **************************************** -->
-  <refentry id ="stmtdefine"><refmeta><refentrytitle>DEFINE</refentrytitle><manvolnum>7</manvolnum></refmeta>
+  <refentry id ="stmtdefine"><refmeta><refentrytitle>SLONIK DEFINE</refentrytitle><manvolnum>7</manvolnum></refmeta>
     <refnamediv><refname>DEFINE</refname>
       <refpurpose>Définir un nom symbolique</refpurpose>
     </refnamediv>
@@ -193,7 +210,7 @@
     système de réplication, mais affecte l'exécution du script tout entier.</para>
   </partintro>
   <refentry id ="clustername">
-    <refmeta><refentrytitle>CLUSTER NAME</refentrytitle><manvolnum>7</manvolnum></refmeta>
+    <refmeta><refentrytitle>SLONIK CLUSTER NAME</refentrytitle><manvolnum>7</manvolnum></refmeta>
     <refnamediv>
       <refname>CLUSTER NAME</refname>
       <refpurpose>préambule - identifier le cluster &slony1;</refpurpose>
@@ -232,7 +249,7 @@
     </refsect1>
   </refentry>
   <refentry id ="admconninfo">
-    <refmeta><refentrytitle>ADMIN CONNINFO</refentrytitle><manvolnum>7</manvolnum></refmeta>
+    <refmeta><refentrytitle>SLONIK ADMIN CONNINFO</refentrytitle><manvolnum>7</manvolnum></refmeta>
     <refnamediv>
       <refname>ADMIN CONNINFO</refname>
       <refpurpose>preambule - identifier la base &postgres;</refpurpose>
@@ -291,7 +308,7 @@
   <title>Commande de configuration et d'action</title>  
   <refentry id ="stmtecho">
     <refmeta>
-      <refentrytitle>ECHO</refentrytitle><manvolnum>7</manvolnum></refmeta>
+      <refentrytitle>SLONIK ECHO</refentrytitle><manvolnum>7</manvolnum></refmeta>
       <refnamediv>
         <refname>ECHO</refname>
         <refpurpose>Outil générique de sortie</refpurpose>
@@ -318,7 +335,7 @@
   
   <!-- **************************************** -->
   
-  <refentry id ="stmtexit"><refmeta><refentrytitle>EXIT</refentrytitle>
+  <refentry id ="stmtexit"><refmeta><refentrytitle>SLONIK EXIT</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>EXIT</refname>
@@ -353,7 +370,7 @@
   <!-- **************************************** -->
   <refentry id="stmtinitcluster">
    <refmeta>
-    <refentrytitle>INIT CLUSTER</refentrytitle>
+    <refentrytitle>SLONIK INIT CLUSTER</refentrytitle>
      <manvolnum>7</manvolnum>
    </refmeta>
    <refnamediv>
@@ -416,6 +433,13 @@
     <para>Cette commande crée un nouveau schéma et configure les
       tables à l'intérieur&nbsp;; aucun objet public ne doit être verrouillé
       pendant l'exécution de cette commande.</para>
+
+   <note> <para> Be aware that some objects are created that contain
+   the cluster name as part of their name.  (Notably, partial indexes
+   on <envar>sl_log_1</envar> and <envar>sl_log_2</envar>.)  As a
+   result, <emphasis>really long</emphasis> cluster names are a bad
+   idea, as they can make object names <quote>blow up</quote> past the
+   typical maximum name length of 63 characters. </para> </note>
    </refsect1>
    <refsect1> <title>Note de version</title>
     <para>Cette commande fut introduite dans &slony1; 1.0.</para>
@@ -424,7 +448,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id ="stmtstorenode"><refmeta><refentrytitle>STORE NODE</refentrytitle>
+  <refentry id ="stmtstorenode"><refmeta><refentrytitle>SLONIK STORE NODE</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>STORE NODE</refname>
@@ -449,7 +473,10 @@
      
      <variablelist>
       <varlistentry><term><literal>ID = ival</literal></term>
-      <listitem><para>L'identifiant numérique et unique du nouveau n&oelig;ud.</para></listitem>
+      <listitem><para>L'identifiant numérique, immuable et unique du nouveau n&oelig;ud.</para>
+<para> Note that the ID is <emphasis>immutable</emphasis>
+      because it is used as the basis for inter-node event
+      communications. </para>      </listitem>
       </varlistentry>
       
       <varlistentry><term><literal> COMMENT = 'description' </literal></term>
@@ -462,14 +489,19 @@
        <listitem><para>Spécifie qu'un n&oelig;ud est un n&oelig;ud virtuel de récupération
 	   pour l'archivage de journaux de réplication. Si ce paramètre est à true,
 	   <application>slonik</application> n'essaiera pas d'initialiser la base de 
-	   donnée avec le schéma de réplication.</para></listitem>
+	   donnée avec le schéma de réplication.</para>
+       <warning><para> Never use the SPOOLNODE value - no released
+       version of &slony1; has ever behaved in the fashion described
+       in the preceding paragraph.  Log shipping, as it finally emerged
+       in 1.2.11, does not require initializing <quote>spool
+       nodes</quote>.</para> </warning> </listitem>
        
       </varlistentry>
       <varlistentry><term><literal>EVENT NODE = ival</literal></term>
        
        <listitem><para>L'identifiant du n&oelig;ud utilisé pour créer l'événement de configuration,
 	   qui prévient tous les n&oelig;uds existants de l'arrivée du nouveau n&oelig;ud.
-	   La valeur par défaut est 1.</para></listitem>
+	   </para></listitem>
       </varlistentry>
      </variablelist>
     </para>
@@ -479,7 +511,7 @@
    </refsect1>
    <refsect1><title>Exemple</title>
     <programlisting>
-     STORE NODE ( ID = 2, COMMENT = 'N&oelig;ud 2');
+     STORE NODE ( ID = 2, COMMENT = 'N&oelig;ud 2', EVENT NODE = 1 );
     </programlisting>
    </refsect1>
    <refsect1> <title>Utilisation de verrous</title>
@@ -493,12 +525,15 @@
      <para>Cette commande fut introduite dans &slony1; 1.0. Le paramètre <envar>SPOOLNODE</envar>
      fut introduit dans la version 1.1 mais n'était pas implémentée dans cette version.
      La fonctionnalité <envar>SPOOLNODE</envar> est arrivée dans la
-   version 1.2.</para>
+   version 1.2, but <envar>SPOOLNODE</envar> was not used
+   for this purpose.  In later versions, <envar>SPOOLNODE</envar>
+   will hopefully be removed entirely. </para>
+   <para> In version 2.0, the default value for <envar>EVENT NODE</envar> was removed, so a node must be specified.</para>
    </refsect1>
   </refentry>
   
 <!-- **************************************** -->
-  <refentry id="stmtdropnode"><refmeta><refentrytitle>DROP NODE</refentrytitle>
+  <refentry id="stmtdropnode"><refmeta><refentrytitle>SLONIK DROP NODE</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>DROP NODE</refname>
@@ -524,7 +559,7 @@
        <listitem><para>L'identifiant du n&oelig;ud à supprimer.</para></listitem>
       </varlistentry>
       <varlistentry><term><literal> EVENT NODE = ival </literal></term>
-       <listitem><para>L'identifiant du n&oelig;ud qui génère l'événement. La valeur par défaut est 1.
+       <listitem><para>L'identifiant du n&oelig;ud qui génère l'événement.
        </para></listitem>
       </varlistentry>
      </variablelist>
@@ -538,7 +573,7 @@
    </refsect1>
    <refsect1><title>Exemple</title>
     <programlisting>
-     DROP NODE ( ID = 2 );
+     DROP NODE ( ID = 2, EVENT NODE = 1 );
     </programlisting>
    </refsect1>
    <refsect1> <title>Utilisation de verrous</title>
@@ -565,11 +600,12 @@
 
    <refsect1> <title>Note de version</title>
     <para>Cette commande fut introduite dans &slony1; 1.0.</para>
+   <para> In version 2.0, the default value for <envar>EVENT NODE</envar> was removed, so a node must be specified.</para>
    </refsect1>
   </refentry>
 
 <!-- **************************************** -->
-  <refentry id="stmtuninstallnode"><refmeta><refentrytitle>UNINSTALL NODE</refentrytitle>
+  <refentry id="stmtuninstallnode"><refmeta><refentrytitle>SLONIK UNINSTALL NODE</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>UNINSTALL NODE</refname>
@@ -632,7 +668,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtrestartnode"><refmeta><refentrytitle>RESTART NODE</refentrytitle>
+  <refentry id="stmtrestartnode"><refmeta><refentrytitle>SLONIK RESTART NODE</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>RESTART NODE</refname>
@@ -641,14 +677,14 @@
    
    <refsynopsisdiv>
     <cmdsynopsis>
-     <command>RESTART NODE (options);</command>
+     <command>RESTART NODE options;</command>
     </cmdsynopsis>
    </refsynopsisdiv>
    <refsect1>
     <title>Description</title>
     
     <para> Provoque l'arrêt et le redémarrage d'un démon 
-      de réplication sur le n&oelig;ud spécifié.
+      de réplication (<application>slon</application> process) sur le n&oelig;ud spécifié.
       Théoriquement, cette commande est obsolète. En pratique,
       les délais TCP peuvent retarder les changements critiques 
       de configuration jusqu'à ce qu'il soit effectué alors que le
@@ -673,15 +709,17 @@
     <para>Aucun verrouillage ne devrait être visible depuis l'application.</para>
    </refsect1>
    <refsect1> <title>Note de version</title>
-    <para>Cette commande fut introduite dans &slony1; 1.0&nbsp;;
-      Elle ne devrait plus être nécessaire à partir de la version 1.0.5.</para>
+    <para>Cette commande fut introduite dans &slony1; 1.0&nbsp;; frequent use became unnecessary as
+   of version 1.0.5.  There are, however, occasional cases where it is
+   necessary to interrupt a <application>slon</application> process,
+   and this allows this to be scripted via slonik. </para>
    </refsect1>
   </refentry>
   
 
   <!-- **************************************** -->
 
-  <refentry id="stmtstorepath"><refmeta><refentrytitle>STORE
+  <refentry id="stmtstorepath"><refmeta><refentrytitle>SLONIK STORE
      PATH</refentrytitle><manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>STORE PATH</refname>
@@ -759,7 +797,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtdroppath"><refmeta><refentrytitle>DROP PATH</refentrytitle>
+  <refentry id="stmtdroppath"><refmeta><refentrytitle>SLONIK DROP PATH</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>DROP PATH</refname>
@@ -810,7 +848,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtstorelisten"><refmeta><refentrytitle>STORE LISTEN</refentrytitle>
+  <refentry id="stmtstorelisten"><refmeta><refentrytitle>SLONIK STORE LISTEN</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>STORE LISTEN</refname>
@@ -882,7 +920,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtdroplisten"><refmeta><refentrytitle>DROP LISTEN</refentrytitle>
+  <refentry id="stmtdroplisten"><refmeta><refentrytitle>SLONIK DROP LISTEN</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>DROP LISTEN</refname>
@@ -937,7 +975,7 @@
 
 <!-- **************************************** -->
 
-<refentry id="stmttableaddkey"><refmeta><refentrytitle>TABLE ADD KEY</refentrytitle>
+<refentry id="stmttableaddkey"><refmeta><refentrytitle>SLONIK TABLE ADD KEY</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>TABLE ADD KEY</refname>
@@ -963,7 +1001,8 @@
     </para>
 
     <para>
-     En dernier recours, cette commande peut être utilisée pour ajouter 
+     En dernier recours, <emphasis>in versions of &slony1; prior to
+     2.0</emphasis>, cette commande peut être utilisée pour ajouter 
      un attribut à une table qui ne possède par de clef primaire.
      Sachant que cette modification peut avoir des effets secondaires
      indésirables, <emphasis>il est très fortement recommandé que les 
@@ -971,6 +1010,12 @@
        leurs propres moyens</emphasis>.
     </para>
 
+   <para> If you intend to use &slony1; version 2.0, you
+   <emphasis>must</emphasis> arrange for a more proper primary key.
+   &slony1; will not provide one for you, and if you have cases of
+   keys created via <command>TABLE ADD KEY</command>, you cannot
+   expect &slony1; to function properly. </para>
+
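+   <para> What arranging for a proper primary key typically amounts to
+   is ordinary DDL applied to the table before it is added to
+   replication; a minimal sketch, for a hypothetical table and key
+   column: </para>
+
+<programlisting>
+ALTER TABLE public.my_table
+  ADD CONSTRAINT my_table_pkey PRIMARY KEY (id);
+</programlisting>
+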
     <variablelist>
      <varlistentry><term><literal>NODE ID = ival</literal></term>
       <listitem><para>Identifiant du n&oelig;ud de l'ensemble de réplication d'origine
@@ -1037,13 +1082,23 @@
    </refsect1>
    <refsect1> <title>Note de version</title>
     <para>Cette commande fut introduite dans &slony1; 1.0.</para>
+<warning>    <para> This command is <emphasis> no longer supported </emphasis>
+    as of &slony1; version 2.0.  In version 2, the various
+    <quote>catalogue breakages</quote> done in &postgres; versions
+    prior to 8.3 are being eliminated so that schema dumps may be
+    taken from any node.  That leaves the <quote>kludgy</quote>
+    columns created via <command>TABLE ADD KEY</command> as the only
+    thing that prevents <xref linkend="stmtuninstallnode"/> from being
+    comprised of the SQL statement <command>drop schema _ClusterName
+    cascade;</command>.</para> </warning>
+
    </refsect1>
   </refentry>
   
 
 <!-- **************************************** -->
 
-  <refentry id="stmtcreateset"><refmeta><refentrytitle>CREATE SET</refentrytitle>
+  <refentry id="stmtcreateset"><refmeta><refentrytitle>SLONIK CREATE SET</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>CREATE SET</refname>
@@ -1113,7 +1168,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtdropset"><refmeta><refentrytitle>DROP SET</refentrytitle>
+  <refentry id="stmtdropset"><refmeta><refentrytitle>SLONIK DROP SET</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>DROP SET</refname>
@@ -1165,7 +1220,7 @@
  
 <!-- **************************************** -->
 
-  <refentry id="stmtmergeset"><refmeta><refentrytitle>MERGE
+  <refentry id="stmtmergeset"><refmeta><refentrytitle>SLONIK MERGE
      SET</refentrytitle><manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>MERGE SET</refname>
@@ -1247,7 +1302,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtsetaddtable"><refmeta><refentrytitle>SET ADD TABLE</refentrytitle>
+  <refentry id="stmtsetaddtable"><refmeta><refentrytitle>SLONIK SET ADD TABLE</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET ADD TABLE</refname>
@@ -1272,7 +1327,7 @@
       </varlistentry>
       <varlistentry><term><literal>ORIGIN = ival</literal></term>
        <listitem><para>N&oelig;ud origine de l'ensemble. Les prochaines versions de <application>slonik</application>
-	 devraient pouvoir deviner cette information.</para></listitem>
+	 devraient pouvoir deviner cette information, but race conditions lurk there.</para></listitem>
       </varlistentry>
       <varlistentry><term><literal>ID = ival</literal></term>
 
@@ -1286,7 +1341,16 @@
 
          <para>Cet identifiant doit être unique pour tous les ensembles de réplication&nbsp;;
 	 vous ne devez pas avoir deux tables du même cluster avec le même identifiant.
-	  </para></listitem>
+	  </para>
+	 <para> Note that &slony1; generates an in-memory array
+	 indicating all of the fully qualified table names; if you use
+	 large table ID numbers, the sparsely-utilized array can lead
+	 to substantial wastage of memory.  Each potential table ID
+	 consumes a pointer to a char, commonly costing 4 bytes per
+	 table ID on 32 bit architectures, and 8 bytes per table ID on
+	 64 bit architectures. </para>
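+
+	 <para> As a rough illustration, allowing table IDs to range up
+	 to 100,000 would reserve roughly 100,000 x 8 bytes, or about
+	 800 kB, on a 64-bit platform for that array alone, even if only
+	 a handful of tables are actually replicated. </para>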
+
+	  </listitem>
       </varlistentry>
       <varlistentry><term><literal>FULLY QUALIFIED NAME = 'string'</literal></term>
        <listitem><para>Le nom complet de la table tel que décrit dans
@@ -1376,9 +1440,13 @@
        <varlistentry><term><literal>Slony-I: cannot add table to currently subscribed set 1</literal></term>
 
         <listitem><para>&slony1; ne peut pas ajouter des tables dans un ensemble qui est 
-	en cours de réplication. Pour contourner ce problème, vous devez définir un nouvel ensemble
-	qui contiendra les nouvelles tables.</para> </listitem> </varlistentry>
+	en cours de réplication.         Instead, you need to define a new replication set, and add any
+        new tables to <emphasis>that</emphasis> set.  You might then
+        use <xref linkend="stmtmergeset"/> to merge the new set into an
+        existing one, if that seems appropriate. </para> </listitem>
+        </varlistentry>
 
+
    </variablelist>    
 
    </refsect1>
@@ -1395,7 +1463,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtsetaddsequence"><refmeta><refentrytitle>SET ADD SEQUENCE</refentrytitle>
+  <refentry id="stmtsetaddsequence"><refmeta><refentrytitle>SLONIK SET ADD SEQUENCE</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET ADD SEQUENCE</refname>
@@ -1467,7 +1535,7 @@
   
 <!-- **************************************** -->
 
-  <refentry id="stmtsetdroptable"><refmeta><refentrytitle>SET DROP TABLE</refentrytitle>
+  <refentry id="stmtsetdroptable"><refmeta><refentrytitle>SLONIK SET DROP TABLE</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET DROP TABLE</refname>
@@ -1525,7 +1593,7 @@
   
 <!-- **************************************** -->
 
-  <refentry id="stmtsetdropsequence"><refmeta><refentrytitle>SET DROP SEQUENCE</refentrytitle>
+  <refentry id="stmtsetdropsequence"><refmeta><refentrytitle>SLONIK SET DROP SEQUENCE</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET DROP SEQUENCE</refname>
@@ -1561,7 +1629,7 @@
     <programlisting>
      SET DROP SEQUENCE (
      ORIGIN = 1,
-     ID = 20,
+     ID = 20
      );
 </programlisting>
    </refsect1>
@@ -1576,7 +1644,7 @@
   
 <!-- **************************************** -->
   
-  <refentry id="stmtsetmovetable"><refmeta><refentrytitle>SET MOVE
+  <refentry id="stmtsetmovetable"><refmeta><refentrytitle>SLONIK SET MOVE
      TABLE</refentrytitle><manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET MOVE TABLE</refname>
@@ -1639,7 +1707,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtsetmovesequence"><refmeta><refentrytitle>SET MOVE SEQUENCE</refentrytitle>
+  <refentry id="stmtsetmovesequence"><refmeta><refentrytitle>SLONIK SET MOVE SEQUENCE</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SET MOVE SEQUENCE</refname>
@@ -1709,7 +1777,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtstoretrigger"><refmeta><refentrytitle>STORE TRIGGER</refentrytitle>
+  <refentry id="stmtstoretrigger"><refmeta><refentrytitle>SLONIK STORE TRIGGER</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>STORE TRIGGER</refname>
@@ -1751,6 +1819,18 @@
       </varlistentry>
      </variablelist>
     </para>
+    <note><para> A nifty trick is that you can run <command>STORE
+    TRIGGER</command> <emphasis>before the trigger is
+    installed;</emphasis> that will not cause any errors.  You could
+    thus add &slony1;'s handling of the trigger
+    <emphasis>before</emphasis> it is installed.  That allows you to
+    be certain that it becomes active on all nodes immediately upon
+    its installation via <xref linkend="stmtddlscript"/>; there is no
+    risk of events getting through in between the <command>EXECUTE
+    SCRIPT</command> and <command>STORE TRIGGER</command>
+    events. </para>
+    </note>
+
     <para>Cette commande utilise &funstoretrigger;.</para>
    </refsect1>
    <refsect1><title>Exemple</title>
@@ -1776,7 +1856,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtdroptrigger"><refmeta><refentrytitle>DROP TRIGGER</refentrytitle>
+  <refentry id="stmtdroptrigger"><refmeta><refentrytitle>SLONIK DROP TRIGGER</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>DROP TRIGGER</refname>
@@ -1831,16 +1911,20 @@
 
     <para>Cette opération pose brièvement  un verrou exclusif sur la table spécifiée
 sur chaque n&oelig;ud auquel elle s'applique, afin de modifier le schéma de la table
-et y ajouter de nouveau le trigger.
+et y ajouter de nouveau le trigger (espérons-le&nbsp;!).
     </para>
    </refsect1>
    <refsect1> <title>Note de version</title>
     <para>Cette commande fut introduite dans &slony1; 1.0.</para>
+    <para> In &slony1; version 2.0, this command is removed as
+    obsolete because triggers are no longer <quote>messed around
+    with</quote> in the system catalogue. </para>
+
    </refsect1>
   </refentry>
   
 <!-- **************************************** -->
-  <refentry id="stmtsubscribeset"><refmeta><refentrytitle>SUBSCRIBE SET</refentrytitle>
+  <refentry id="stmtsubscribeset"><refmeta><refentrytitle>SLONIK SUBSCRIBE SET</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>SUBSCRIBE SET</refname>
@@ -1960,7 +2044,10 @@
        
        <listitem><para>Indique si le nouvel abonné doit stocker les logs
 pendant la réplication afin de pouvoir devenir fournisseur pour de futurs 
-n&oelig;uds.</para></listitem>
+n&oelig;uds.  Any
+       node that is intended to be a candidate for FAILOVER
+       <emphasis>must</emphasis> have <command>FORWARD =
+       yes</command>.</para></listitem>
 
       </varlistentry>
      </variablelist>
@@ -2071,7 +2158,7 @@
   
 <!-- **************************************** -->
 
-  <refentry id="stmtunsubscribeset"><refmeta><refentrytitle>UNSUBSCRIBE SET</refentrytitle>
+  <refentry id="stmtunsubscribeset"><refmeta><refentrytitle>SLONIK UNSUBSCRIBE SET</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>UNSUBSCRIBE SET</refname>
@@ -2136,7 +2223,7 @@
   
 <!-- **************************************** -->
 
-  <refentry id ="stmtlockset"><refmeta><refentrytitle>LOCK SET</refentrytitle>
+  <refentry id ="stmtlockset"><refmeta><refentrytitle>SLONIK LOCK SET</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>LOCK SET</refname>
@@ -2211,7 +2298,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtunlockset"><refmeta><refentrytitle>UNLOCK SET</refentrytitle>
+  <refentry id="stmtunlockset"><refmeta><refentrytitle>SLONIK UNLOCK SET</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>UNLOCK SET</refname>
@@ -2264,7 +2351,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtmoveset"><refmeta><refentrytitle>MOVE SET</refentrytitle>
+  <refentry id="stmtmoveset"><refmeta><refentrytitle>SLONIK MOVE SET</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>MOVE SET</refname>
@@ -2354,7 +2441,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtfailover"><refmeta><refentrytitle>FAILOVER</refentrytitle>
+  <refentry id="stmtfailover"><refmeta><refentrytitle>SLONIK FAILOVER</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>FAILOVER</refname>
@@ -2429,15 +2516,24 @@
      <xref linkend="stmtmoveset"/> car elle n'abandonne
     <emphasis>pas</emphasis> le n&oelig;ud en panne.
     </para>
+    <para> If there are many nodes in a cluster, and failover includes
+    dropping out additional nodes (<emphasis>e.g.</emphasis> when it
+    is necessary to treat <emphasis>all</emphasis> nodes at a site
+    including an origin as well as subscribers as failed), it is
+    necessary to carefully sequence the actions, as described in <xref
+    linkend="complexfailover"/>.
+    </para>
+
    </refsect1>
    <refsect1> <title>Note de version</title>
     <para>Cette commande fut introduite dans &slony1; 1.0.</para>
+    <para> In version 2.0, the default <envar>BACKUP NODE</envar> value of 1 was removed, so it is mandatory to provide a value for this parameter.</para>
    </refsect1>
   </refentry>
 
 <!-- **************************************** -->
 
-  <refentry id="stmtddlscript"><refmeta><refentrytitle>EXECUTE SCRIPT</refentrytitle>
+  <refentry id="stmtddlscript"><refmeta><refentrytitle>SLONIK EXECUTE SCRIPT</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>EXECUTE SCRIPT</refname>
@@ -2485,13 +2581,13 @@
       
      </varlistentry>
      <varlistentry><term><literal>EVENT NODE = ival</literal></term>
-      <listitem><para>(Optionnel) L'identifiant de l'origine courante de l'ensemble. La valeur par défaut est 1.</para></listitem>
+      <listitem><para>(Obligatoire) L'identifiant de l'origine courante de l'ensemble.</para></listitem>
       
      </varlistentry>
      <varlistentry><term><literal>EXECUTE ONLY ON = ival</literal></term>
      <listitem><para>(Optionnel) L'identifiant du seul n&oelig;ud qui
-doit exécuter le script. Cette option implique que le script sera propagé, par <xref linkend="slonik"/>,
-       <emphasis>seulement</emphasis> sur le seul n&oelig;ud spécifié.
+    doit exécuter le script. Cette option implique que le script est propagé sur
+tous les n&oelig;uds mais exécuté seulement sur celui-ci.
 Par défaut, on exécute le script sur tous les n&oelig;uds abonnés à l'ensemble de réplication.
 	</para></listitem> 
       
@@ -2503,7 +2599,7 @@
     <para>Notons qu'il s'agit d'une opération &rlocking;, ce qui signifie qu'elle peut être
 bloquée par l'activité d'une autre base.</para>
      
-    <para>Au démarrage de cet événement, toutes les tables répliquées sont
+    <para>Dans les versions jusqu'à la branche 1.2 incluse, au démarrage de cet événement, toutes les tables répliquées sont
 déverrouillées par la fonction <function>alterTableRestore(tab_id)</function>. 
 Une fois le script SQL exécuté, elles sont remises en <quote>mode réplication
 </quote> avec <function>alterTableForReplication(tab_id)</function>.  
@@ -2511,8 +2607,9 @@
 &slon; pendant la durée du script SQL.</para>
 
     <para>Si les colonnes d'une table sont modifiées, il est très
-important que les triggers soient régénérés, sinon ils peuvent 
-être inadaptés à la nouvelle forme du schéma.
+important que les triggers soient régénérés (par exemple par
+    <function>alterTableForReplication(tab_id)</function>), sinon les attributs du trigger <function>logtrigger()</function>
+peuvent être inadaptés à la nouvelle forme du schéma.
     </para>
 
     <para>Notez que si vous devez faire référence au nom du cluster,
@@ -2527,14 +2624,14 @@
     <programlisting>
 EXECUTE SCRIPT (
    SET ID = 1,
-   FILENAME = '/tmp/changes_2004-05-01.sql',
+   FILENAME = '/tmp/changes_2008-04-01.sql',
    EVENT NODE = 1
 );
     </programlisting>
    </refsect1>
    <refsect1> <title>Utilisation de verrous</title>
 
-    <para>Un verrou exclusif est posé 
+    <para>Jusqu'à la branche 2.0, un verrou exclusif est posé 
 sur chaque table répliquée sur le n&oelig;ud origine, afin de retirer les triggers
 de réplication. Une fois le script DDL achevé, ces verrous sont enlevés.
      </para>
@@ -2544,6 +2641,13 @@
 les triggers des tables répliquées.
     </para>
 
+    <para> As of the 2.0 branch, &slony1; uses a GUC that controls
+    trigger behaviour, which allows deactivating the &slony1;-created
+    triggers during this operation <emphasis>without</emphasis> the
+    need to take out exclusive locks on all tables.  Now, the only
+    tables requiring exclusive locks are those tables that are
+    actually altered as a part of the DDL script. </para>
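+
+    <para> For reference, the GUC in question is &postgres;'s
+    <envar>session_replication_role</envar> setting (available from
+    &postgres; 8.3 onward).  As a rough illustration of the underlying
+    mechanism (not a command you would normally issue by hand, since
+    the replication code manages it): </para>
+
+    <programlisting>
+-- Illustration only: make the current session behave like a replica
+-- session, so that ordinary (ENABLE TRIGGER) triggers do not fire.
+SET session_replication_role = replica;
+-- ... apply changes ...
+-- Return to the default behaviour.
+SET session_replication_role = DEFAULT;
+    </programlisting>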
+
    </refsect1>
    <refsect1> <title>Note de version</title>
     <para>Cette commande fut introduite dans &slony1; 1.0.</para>
@@ -2574,12 +2678,15 @@
 que les triggers sont retirés au départ et restaurés à la fin).
 Ceci couvre les risques lorsqu'on lance une requête de changements DDL
 sur des tables appartenant à plusieurs ensemble de réplication.</para>
+
+   <para> In version 2.0, the default value for <envar>EVENT
+   NODE</envar> was removed, so a node must be specified.</para>
    </refsect1>
   </refentry>
 
 <!-- **************************************** -->
 
-  <refentry id="stmtupdatefunctions"><refmeta><refentrytitle>UPDATE FUNCTIONS</refentrytitle>
+  <refentry id="stmtupdatefunctions"><refmeta><refentrytitle>SLONIK UPDATE FUNCTIONS</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>UPDATE FUNCTIONS</refname>
@@ -2649,7 +2756,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtwaitevent"><refmeta><refentrytitle>WAIT FOR EVENT</refentrytitle>
+  <refentry id="stmtwaitevent"><refmeta><refentrytitle>SLONIK WAIT FOR EVENT</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>WAIT FOR EVENT</refname>
@@ -2672,8 +2779,8 @@
  <command>CREATE SET</command>) soient traités sur un autre n&oelig;ud avant de 
 lancer d'autres commandes (par exemple <xref linkend="stmtsubscribeset"/>).  
 <command>WAIT FOR EVENT</command> peut être utilisé pour demander à un 
-script <application>slonik</application> d'attendre jusqu'à ce que le n&oelig;ud abonné
-soit prêt pour l'action suivante.
+script <application>slonik</application> d'attendre la confirmation d'un événement, ce qui
+    signifie normalement que le n&oelig;ud abonné est prêt pour l'action suivante.
     </para>
     
     <para><command>WAIT FOR EVENT</command> doit être appelée en dehors d'un
@@ -2694,7 +2801,7 @@
       </varlistentry>
       <varlistentry><term><literal>WAIT ON = ival</literal></term>
        <listitem><para>L'identifiant du n&oelig;ud où la table  &slconfirm; est vérifiée.
-La valeur par défaut est 1.</para></listitem>
+       </para></listitem>
        
       </varlistentry>
       <varlistentry><term><literal>TIMEOUT = ival</literal></term>
@@ -2722,6 +2829,9 @@
    </refsect1>
    <refsect1> <title>Note de version</title>
     <para>Cette commande fut introduite dans &slony1; 1.0.</para>
+
+   <para> In version 2.0, the default value for <envar>WAIT ON</envar>
+   was removed, so a node must be specified.</para>
    </refsect1>
    
    <refsect1> <title>Bizarreries</title> 
@@ -2744,11 +2854,11 @@
     <programlisting>
      # Supposons que l'ensemble 1 a deux abonnés direct 2 et 3
      SUBSCRIBE SET (ID = 999, PROVIDER = 1, RECEIVER = 2);
-     SYNC (ID=1);
-     WAIT FOR EVENT (ORIGIN = 1, CONFIRMED = 2, WAIT ON=1);
+     WAIT FOR EVENT (ORIGIN = 1, CONFIRMED = ALL, WAIT ON=1);
      SUBSCRIBE SET (ID = 999, PROVIDER = 1, RECEIVER = 3);
+     WAIT FOR EVENT (ORIGIN = 1, CONFIRMED = ALL, WAIT ON=1);
      SYNC (ID=1);
-     WAIT FOR EVENT (ORIGIN = 1, CONFIRMED = 3, WAIT ON=1);
+     WAIT FOR EVENT (ORIGIN = 1, CONFIRMED = ALL, WAIT ON=1);
      MERGE SET ( ID = 1, ADD ID = 999, ORIGIN = 1 );
     </programlisting>
    </refsect1>
@@ -2757,7 +2867,7 @@
 
 <!-- **************************************** -->
 
-  <refentry id="stmtrepairconfig"><refmeta><refentrytitle>REPAIR CONFIG</refentrytitle>
+  <refentry id="stmtrepairconfig"><refmeta><refentrytitle>SLONIK REPAIR CONFIG</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>REPAIR CONFIG</refname>
@@ -2813,7 +2923,7 @@
   </refentry>
 <!-- **************************************** -->
 
-  <refentry id="stmtsync"><refmeta><refentrytitle>SYNC</refentrytitle>
+  <refentry id="stmtsync"><refmeta><refentrytitle>SLONIK SYNC</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
 
    <refnamediv><refname>SYNC</refname>
@@ -2856,7 +2966,7 @@
 
 <!-- **************************************** -->
   
-  <refentry id ="stmtsleep"><refmeta><refentrytitle>SLEEP</refentrytitle>
+  <refentry id ="stmtsleep"><refmeta><refentrytitle>SLONIK SLEEP</refentrytitle>
    <manvolnum>7</manvolnum></refmeta>
    
    <refnamediv><refname>SLEEP</refname>
@@ -2885,5 +2995,96 @@
    </refsect1>
   </refentry>
 
+
+
+  <refentry id ="stmtcloneprepare"><refmeta><refentrytitle>SLONIK CLONE PREPARE</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
+   
+   <refnamediv><refname>CLONE PREPARE</refname>
+    
+    <refpurpose> Prepare for cloning a node. </refpurpose>
+   </refnamediv>
+   <refsynopsisdiv>
+    <cmdsynopsis>
+     <command>clone prepare </command>
+     <arg><replaceable class="parameter"> id</replaceable></arg>
+     <arg><replaceable class="parameter"> provider</replaceable></arg>
+     <arg><replaceable class="parameter"> comment</replaceable></arg>
+    </cmdsynopsis>
+   </refsynopsisdiv>
+   <refsect1>
+    <title>Description</title>
+    <para>
+     Prepares for cloning a specified node.
+    </para>
+
+    <para>
+     This duplicates the <quote>provider</quote> node's configuration
+     under a new node ID in preparation for the node to be copied via
+     standard database tools.
+    </para>
+
+    <para> Note that in order that we be certain that this new node be
+    consistent with all nodes, it is important to issue a SYNC event
+    against every node aside from the provider and wait to start
+    copying the provider database at least until all those SYNC events
+    have been confirmed by the provider.  Otherwise, it is possible
+    for the clone to miss some events. </para>
+
+   </refsect1>
+   
+   <refsect1>
+   <title>Example</title>
+   <programlisting>
+     clone prepare (id = 33, provider = 22, comment='Clone 33');
+     sync (id=11);
+     sync (id=22);
+   </programlisting>
+   </refsect1>
+   
+   <refsect1> <title> Version Information </title>
+    <para> This command was introduced in &slony1; 2.0. </para>
+   </refsect1>
+  </refentry>
+
+
+  <refentry id ="stmtclonefinish"><refmeta><refentrytitle>SLONIK CLONE FINISH</refentrytitle>
+   <manvolnum>7</manvolnum></refmeta>
+   
+   <refnamediv><refname>CLONE FINISH</refname>
+    
+    <refpurpose> Complete cloning a node. </refpurpose>
+   </refnamediv>
+   <refsynopsisdiv>
+    <cmdsynopsis>
+     <command>clone finish </command>
+     <arg><replaceable class="parameter"> id</replaceable></arg>
+     <arg><replaceable class="parameter"> provider</replaceable></arg>
+    </cmdsynopsis>
+   </refsynopsisdiv>
+   <refsect1>
+    <title>Description</title>
+    <para>
+     Finishes cloning a specified node.
+    </para>
+
+    <para>
+     This completes the work done by <xref
+     linkend="stmtcloneprepare"/>, establishing confirmation data for
+     the new <quote>clone</quote> based on the status found for the
+     <quote>provider</quote> node.
+    </para>
+   </refsect1>
+   
+   <refsect1><title>Example</title>
+   <programlisting>
+     clone finish (id = 33, provider = 22);
+   </programlisting>
+   </refsect1>
+   
+   <refsect1> <title> Version Information </title>
+    <para> This command was introduced in &slony1; 2.0. </para>
+   </refsect1>
+  </refentry>
   
  </reference>

Modified: traduc/branches/slony_1_2/slony.xml
===================================================================
--- traduc/branches/slony_1_2/slony.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/slony.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -50,12 +50,25 @@
   <!ENTITY slnode "<xref linkend='table.sl-node'/>">
   <!ENTITY sllog1 "<xref linkend='table.sl-log-1'/>">
   <!ENTITY sllog2 "<xref linkend='table.sl-log-2'/>">
+  <!ENTITY slseqlog "<xref linkend='table.sl-seqlog'/>">
   <!ENTITY slconfirm "<xref linkend='table.sl-confirm'/>">
+
+  <!ENTITY slevent "<xref linkend='table.sl-event'/>">
+  <!ENTITY slnode "<xref linkend='table.sl-node'/>">
+  <!ENTITY slpath "<xref linkend='table.sl-path'/>">
+  <!ENTITY sllisten "<xref linkend='table.sl-listen'/>">
+  <!ENTITY slregistry "<xref linkend='table.sl-registry'/>">
+  <!ENTITY slsetsync "<xref linkend='table.sl-setsync'/>">
+  <!ENTITY slsubscribe "<xref linkend='table.sl-subscribe'/>">
+  <!ENTITY sltable "<xref linkend='table.sl-table'/>">
+  <!ENTITY slset "<xref linkend='table.sl-set'/>">
+
   <!ENTITY rplainpaths "<xref linkend='plainpaths'/>">
   <!ENTITY rlistenpaths "<xref linkend='listenpaths'/>">
   <!ENTITY pglistener "<envar>pg_listener</envar>">
   <!ENTITY lslon "<xref linkend='slon'/>">
   <!ENTITY lslonik "<xref linkend='slonik'/>">
+  <!ENTITY lteststate "<xref linkend='testslonystate'/>">
   <!ENTITY lfrenchtranslation "<xref linkend='frenchtranslation'/>"> 
 ]>
 
@@ -101,7 +114,9 @@
     &failover;
     &listenpaths;
     &plainpaths;
+    &triggers;
     &locking;
+    &raceconditions;
     &addthings;
     &dropthings;
     &logshipfile;
@@ -114,6 +129,8 @@
     &testbed;
     &loganalysis;
     &help;
+    &supportedplatforms;
+    &releasechecklist;
 
   </article>
 
@@ -145,8 +162,6 @@
   </part>
 
 
-  &supportedplatforms;
-  &releasechecklist;
   &schemadoc;
   <!-- &bookindex; -->
 

Modified: traduc/branches/slony_1_2/slonyupgrade.xml
===================================================================
--- traduc/branches/slony_1_2/slonyupgrade.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/slonyupgrade.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -132,4 +132,181 @@
   </varlistentry>
 </variablelist>
 
+
+<sect2> <title> TABLE ADD KEY issue in &slony1; 2.0 </title> 
+
+<para> Usually, upgrades between &slony1; versions have required no
+special attention to the condition of the existing replica.  That is,
+you generally need only stop &lslon;s, put new binaries in
+place, run <xref linkend="stmtupdatefunctions"/> against each node, and
+restart &lslon;s.  Schema changes have been internal to the cluster
+schema, and <xref linkend="stmtupdatefunctions"/> has been capable of
+making all of the needed alterations.  With version 2, this changes if
+there are tables that used <xref linkend="stmttableaddkey"/>.  Version
+2 does not support the <quote>extra</quote> column, and
+<quote>fixing</quote> the schema to have a proper primary key is not
+within the scope of what <xref linkend="stmtupdatefunctions"/> can
+perform.  </para>
+
+<para> When upgrading from versions 1.0.x, 1.1.x, or 1.2.x to version
+2, it will be necessary to have already eliminated any such
+&slony1;-managed primary keys. </para>
+
+<para> One may identify the tables affected via the following SQL
+query: <command> select n.nspname, c.relname from pg_class c,
+pg_namespace n where c.oid in (select attrelid from pg_attribute where
+attname like '_Slony-I_%rowID' and not attisdropped) and reltype &lt;&gt; 0
+and n.oid = c.relnamespace order by n.nspname, c.relname; </command>
+</para>
+
+<para> The simplest approach that may be taken to rectify the
+<quote>broken</quote> state of such tables is as follows: </para>
+
+<itemizedlist>
+
+<listitem><para> Drop the table from replication using the &lslonik;
+command <xref linkend="stmtsetdroptable"/>. </para>
+
+<para> This does <emphasis>not</emphasis> drop out the
+&slony1;-generated column. </para>
+</listitem>
+
+<listitem><para> On each node, run an SQL script to alter the table,
+dropping the extra column.</para> <para> <command> alter table
+whatever drop column "_Slony-I_cluster-rowID";</command> </para>
+
+<para> This needs to be run individually against each node.  Depending
+on your preferences, you might wish to use <xref
+linkend="stmtddlscript"/> to do this. </para>
+
+<para> If the table is a heavily updated one, it is worth observing
+that this alteration will require acquiring an exclusive lock on the
+table.  It will not hold this lock for terribly long; dropping the
+column should be quite a rapid operation as all it does internally is
+to mark the column as being dropped; it <emphasis>does not</emphasis>
+require rewriting the entire contents of the table.  Tuples that have
+values in that column will continue to have that value; new tuples
+will leave it NULL, and queries will ignore the column.  Space for
+those columns will get reclaimed as tuples get updated.  </para>
+
+<para> Note that at this point in the process, this table is not being
+replicated.  If a failure takes place, replication is not, at this
+point, providing protection on this table.  This is unfortunate but
+unavoidable. </para>
+</listitem>
+
+<listitem><para> Make sure the table has a legitimate candidate for
+primary key, some set of NOT NULL, UNIQUE columns.  </para>
+
+<para> The possible variations to this are the reason that the
+developers have made no effort to try to assist automation of
+this.</para></listitem>
+</itemizedlist>
+
+<itemizedlist>
+
+<listitem><para> If the table is a small one, it may be perfectly
+reasonable to do alterations (note that they must be applied to
+<emphasis>every node</emphasis>!) to add a new column, assign it via a
+new sequence, and then declare it to be a primary key (see the rough
+SQL sketch following this list).  </para>
+
+<para> If there are only a few tuples, this should take a fraction of
+a second, and, with luck, be unnoticeable to a running
+application. </para>
+
+<para> Even if the table is fairly large, if it is not frequently
+accessed by the application, the locking of the table that takes place
+when you run <command>ALTER TABLE</command> may not cause much
+inconvenience. </para></listitem>
+
+<listitem> <para> If the table is a large one, and is vital to and
+heavily accessed by the application, then it may be necessary to take
+an application outage in order to accomplish the alterations, leaving
+you necessarily somewhat vulnerable until the process is
+complete. </para>
+
+<para> If it is troublesome to take outages, then the upgrade to
+&slony1; version 2 may take some planning... </para>
+</listitem>
+
+</itemizedlist>
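+
+<para> As a rough sketch of the <quote>small table</quote> case (the
+table, column, and sequence names here are hypothetical, and the
+statements must be applied to <emphasis>every</emphasis> node,
+typically via <xref linkend="stmtddlscript"/>): </para>
+
+<programlisting>
+-- Hypothetical example: give table "mytable" a usable primary key.
+CREATE SEQUENCE mytable_id_seq;
+ALTER TABLE mytable ADD COLUMN id bigint;
+UPDATE mytable SET id = nextval('mytable_id_seq');  -- backfill existing rows
+ALTER TABLE mytable ALTER COLUMN id SET DEFAULT nextval('mytable_id_seq');
+ALTER TABLE mytable ALTER COLUMN id SET NOT NULL;
+ALTER TABLE mytable ADD PRIMARY KEY (id);
+</programlisting>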
+
+<itemizedlist>
+
+<listitem><para> Create a new replication set (<xref
+linkend="stmtcreateset"/>) and re-add the table to that set (<xref
+linkend="stmtsetaddtable"/>).  </para>
+
+<para> If there are multiple tables, they may be handled via a single
+replication set.</para>
+</listitem>
+
+<listitem><para> Subscribe the set (<xref linkend="stmtsubscribeset"/>)
+on all the nodes desired. </para> </listitem>
+
+<listitem><para> Once subscriptions are complete, merge the set(s) in,
+if desired (<xref linkend="stmtmergeset"/>). </para> </listitem>
+
+</itemizedlist>
+
+<para> This approach should be fine for tables that are relatively
+small, or infrequently used.  If, on the other hand, the table is
+large and heavily used, another approach may prove necessary, namely
+to create your own sequence, and <quote>promote</quote> the formerly
+&slony1;-generated column into a <quote>real</quote> column in your
+database schema.  An outline of the steps is as follows (a rough SQL
+sketch appears after this list): </para>
+
+<itemizedlist>
+
+<listitem><para> Add a sequence that assigns values to the
+column. </para>
+
+<para> Setup steps will include SQL <command>CREATE
+SEQUENCE</command>, SQL <command>SELECT SETVAL()</command> (to set the
+value of the sequence high enough to reflect values used in the
+table), Slonik <xref linkend="stmtcreateset"/> (to create a set to
+assign the sequence to), Slonik <xref linkend="stmtsetaddsequence"/>
+(to assign the sequence to the set), Slonik <xref
+linkend="stmtsubscribeset"/> (to set up subscriptions to the new
+set)</para>
+</listitem>
+
+<listitem><para> Attach the sequence to the column on the
+table. </para>
+
+<para> This involves <command>ALTER TABLE ALTER COLUMN</command>,
+which must be submitted via the Slonik command <xref
+linkend="stmtddlscript"/>. </para>
+</listitem>
+
+<listitem><para> Rename the column
+<envar>_Slony-I_@CLUSTERNAME@_rowID</envar> so that &slony1; won't
+consider it to be under its control.</para>
+
+<para> This involves <command>ALTER TABLE RENAME COLUMN</command>,
+which must be submitted via the Slonik command <xref
+linkend="stmtddlscript"/>. </para>
+
+<para> Note that these two alterations might be accomplished via the
+same <xref linkend="stmtddlscript"/> request. </para>
+</listitem>
+
+</itemizedlist>
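+
+<para> A rough SQL sketch of those steps (the cluster, table, column,
+and sequence names here are hypothetical; as noted above, the
+<command>ALTER TABLE</command> statements would be submitted via <xref
+linkend="stmtddlscript"/>): </para>
+
+<programlisting>
+-- Hypothetical example for a cluster named "mycluster" and a table "mytable".
+CREATE SEQUENCE mytable_rowid_seq;
+-- Start the sequence beyond the values already assigned by Slony-I.
+SELECT setval('mytable_rowid_seq',
+              (SELECT max("_Slony-I_mycluster_rowID") FROM mytable));
+-- Attach the sequence to the column...
+ALTER TABLE mytable
+  ALTER COLUMN "_Slony-I_mycluster_rowID"
+  SET DEFAULT nextval('mytable_rowid_seq');
+-- ...and rename the column so it is no longer considered to be under
+-- replication control.
+ALTER TABLE mytable
+  RENAME COLUMN "_Slony-I_mycluster_rowID" TO row_id;
+</programlisting>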
+
+</sect2>
+
+<sect2> <title> New Trigger Handling in &slony1; Version 2 </title>
+
+<para> One of the major changes to &slony1; is that enabling/disabling
+of triggers and rules now takes place as plain SQL, supported by
+&postgres; 8.3+, rather than via <quote>hacking</quote> on the system
+catalog. </para>
+
+<para> As a result, &slony1; users should be aware of the &postgres;
+syntax for <command>ALTER TABLE</command>, as that is how they can
+accomplish what was formerly accomplished via <xref
+linkend="stmtstoretrigger"/> and <xref linkend="stmtdroptrigger"/>. </para>
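+
+<para> For example (the table and trigger names here are hypothetical;
+see the &postgres; documentation for the full <command>ALTER
+TABLE</command> syntax): </para>
+
+<programlisting>
+-- Hypothetical examples of the plain SQL now used to control triggers:
+ALTER TABLE mytable DISABLE TRIGGER my_trigger;         -- stop it from firing
+ALTER TABLE mytable ENABLE TRIGGER my_trigger;          -- normal firing behaviour
+ALTER TABLE mytable ENABLE REPLICA TRIGGER my_trigger;  -- fire only in replica sessions
+ALTER TABLE mytable ENABLE ALWAYS TRIGGER my_trigger;   -- fire regardless of session role
+</programlisting>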
+
+</sect2>
+
 </sect1>

Modified: traduc/branches/slony_1_2/subscribenodes.xml
===================================================================
--- traduc/branches/slony_1_2/subscribenodes.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/subscribenodes.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -122,7 +122,7 @@
     voir une erreur similaire celle-ci&nbsp;:
   </para>
 
-  <screen>2005-04-13 07:11:28 PDT ERROR remoteWorkerThread_11: "declare LOG
+  <screen>2007-04-13 07:11:28 PDT ERROR remoteWorkerThread_11: "declare LOG
 cursor for select log_origin, log_xid, log_tableid, log_actionseq,
 log_cmdtype, log_cmddata from "_T1".sl_log_1 where log_origin = 11 and
 ( order by log_actionseq; " PGRES_FATAL_ERROR ERROR: syntax error at

Modified: traduc/branches/slony_1_2/supportedplatforms.xml
===================================================================
--- traduc/branches/slony_1_2/supportedplatforms.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/supportedplatforms.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -14,7 +14,7 @@
   sur ces plate-formes.
 </para>
 
-<para>Dernière mise à jour&nbsp;: 17 novembre 2006</para>
+<para>Dernière mise à jour&nbsp;: 23 juin 2005</para>
 
 <para>
   Si vous rencontrez des problèmes sur une de ces plate-formes, s'il-vous-plait,
@@ -147,33 +147,6 @@
      </row>
 
      <row>
-      <entry>Fedora Core</entry>
-      <entry>5</entry>
-      <entry>x86</entry>
-      <entry>Nov 17, 2006</entry>
-      <entry>devrim at CommandPrompt.com</entry>
-      <entry>&postgres; Version: 8.1.5</entry>
-     </row>
-
-     <row>
-      <entry>Fedora Core</entry>
-      <entry>6</entry>
-      <entry>x86</entry>
-      <entry>Nov 17, 2006</entry>
-      <entry>devrim at CommandPrompt.com</entry>
-      <entry>&postgres; Version: 8.1.5</entry>
-     </row>
-
-     <row>
-      <entry>Fedora Core</entry>
-      <entry>6</entry>
-      <entry>x86_64</entry>
-      <entry>Nov 17, 2006</entry>
-      <entry>devrim at CommandPrompt.com</entry>
-      <entry>&postgres; Version: 8.1.5</entry>
-     </row>
-
-     <row>
       <entry>Red Hat Linux</entry>
       <entry>9</entry>
       <entry>x86</entry>

Modified: traduc/branches/slony_1_2/testbed.xml
===================================================================
--- traduc/branches/slony_1_2/testbed.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/testbed.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -226,6 +226,19 @@
   </glossentry>
 
   <glossentry>
+    <glossterm><envar>TMPDIR</envar></glossterm>
+
+    <glossdef>
+      <para>
+        By default, the tests will generate their output in
+        <filename>/tmp</filename>, <filename>/usr/tmp</filename>, or
+        <filename>/var/tmp</filename>, unless you set your own value for this
+        environment variable.
+      </para>
+    </glossdef>
+  </glossentry>
+
+  <glossentry>
     <glossterm><envar>SLTOOLDIR</envar></glossterm>
       
     <glossdef>
@@ -264,6 +277,45 @@
       </para>
     </glossdef>  
   </glossentry>
+
+  <glossentry>
+    <glossterm><envar>SLONCONF[n]</envar></glossterm>
+
+<glossdef><para> If set to <quote>true</quote>, for a particular node,
+typically handled in <filename>settings.ik</filename> for a given
+test, then configuration will be set up in a <link
+linkend="runtime-config"> per-node <filename>slon.conf</filename>
+runtime config file. </link> </para> </glossdef>
+</glossentry>
+
+<glossentry>
+<glossterm><envar>SLONYTESTER</envar></glossterm>
+
+<glossdef><para> Email address of the person who might be
+contacted about the test results. This is stored in the
+<envar>SLONYTESTFILE</envar>, and may eventually be aggregated in some
+sort of buildfarm-like registry. </para> </glossdef>
+</glossentry>
+
+<glossentry>
+<glossterm><envar>SLONYTESTFILE</envar></glossterm>
+
+<glossdef><para> File in which to store summary results from tests.
+Eventually, this may be used to construct a buildfarm-like repository of
+aggregated test results. </para> </glossdef>
+</glossentry>
+
+<glossentry>
+<glossterm><filename>random_number</filename> and <filename>random_string</filename> </glossterm>
+
+<glossdef><para> If you run <command>make</command> in the
+<filename>test</filename> directory, C programs
+<application>random_number</application> and
+<application>random_string</application> will be built which will then
+be used when generating random data in lieu of using shell/SQL
+capabilities that are much slower than the C programs.  </para>
+</glossdef>
+</glossentry>
 </glosslist>
 
 <para>

Modified: traduc/branches/slony_1_2/usingslonik.xml
===================================================================
--- traduc/branches/slony_1_2/usingslonik.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/usingslonik.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -149,14 +149,6 @@
 	node 2 admin conninfo = 'dbname=$DB2';
 
 	try {
-		table add key (node id = 1, fully qualified name = 
-                               'public.history');
-	}
-	on error {
-		exit 1;
-	}
-
-	try {
 		create set (id = 1, origin = 1, comment = 
                             'Set 1 - pgbench tables');
 		set add table (set id = 1, origin = 1,
@@ -170,7 +162,7 @@
 			comment = 'Table tellers');
 		set add table (set id = 1, origin = 1,
 			id = 4, fully qualified name = 'public.history',
-			key = serial, comment = 'Table accounts');
+			comment = 'Table history');
 	}
 	on error {
 		exit 1;
@@ -213,12 +205,6 @@
 slonik <<_EOF_
 $PREAMBULE
 try {
-    table add key (node id = $origin, fully qualified name = 
-                   'public.history');
-} on error {
-    exit 1;
-}
-try {
 	create set (id = $mainset, origin = $origin, 
                     comment = 'Set $mainset - pgbench tables');
 	set add table (set id = $mainset, origin = $origin,
@@ -232,7 +218,7 @@
 		comment = 'Table tellers');
 	set add table (set id = $mainset, origin = $origin,
 		id = 4, fully qualified name = 'public.history',
-		key = serial, comment = 'Table accounts');
+		comment = 'Table history');
 } on error {
 	exit 1;
 }
@@ -264,12 +250,6 @@
 slonik <<_EOF_
 $PREAMBULE
 try {
-    table add key (node id = $origin, fully qualified name = 
-                   'public.history');
-} on error {
-    exit 1;
-}
-try {
 	create set (id = $mainset, origin = $origin, 
                     comment = 'Set $mainset - pgbench tables');
 $ADDTABLES

Modified: traduc/branches/slony_1_2/version.xml
===================================================================
--- traduc/branches/slony_1_2/version.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/version.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -1,2 +1,2 @@
-<!ENTITY version "1.2.15">
+<!ENTITY version "1.2.16">
 <!ENTITY majorversion "1.2">

Modified: traduc/branches/slony_1_2/versionupgrade.xml
===================================================================
--- traduc/branches/slony_1_2/versionupgrade.xml	2009-05-22 21:03:30 UTC (rev 1328)
+++ traduc/branches/slony_1_2/versionupgrade.xml	2009-05-23 12:31:51 UTC (rev 1329)
@@ -56,7 +56,7 @@
 </para>  
 
 <para>
-  Notez que cette opération a provoqué une coupure de service de 40 heures.
+  Notez que cette approche a provoqué une coupure de service de 40 heures.
 </para>
 
 <para>
@@ -64,12 +64,14 @@
   par une autre de quelques minutes, voire quelques secondes. Cette approche
   consiste à créer un réplicat &slony1; utilisant la nouvelle version de
   &postgres;. Il est possible que cela prenne plus de 40 heures pour créer ce
-  réplicat mais, une fois qu'il est créé, il peut être rafraîchi rapidement.
+  réplicat. Néanmoins, la création du réplicat ne nécessite aucun arrêt de la
+  production et, une fois qu'il est créé, il peut être rafraîchi rapidement.
 </para>  
   
 <para>
-  Au moment de basculer vers la nouvelle base de données, la procédure est
-  beaucoup moins longue&nbsp;:
+  Au moment de basculer vers la nouvelle base de données, la portion de la
+  procédure qui nécessite un arrêt de l'application est beaucoup moins
+  longue&nbsp;:
 
   <itemizedlist>
     <listitem>
@@ -104,10 +106,11 @@
 
 <para>
   Cette procédure devrait prendre un temps très court, qui dépendra
-  principalement de votre rapidité lors de la reconfiguration de vos
-  applications. Si vous pouvez souhaiter automatiser toutes ces étapes, il
-  est possible que cela prenne moins d'une seconde. Sinon il est probable
-  que cela prenne entre quelques secondes et quelques minutes.
+  principalement du temps nécessaire pour reconfigurer vos applications.
+  Si vous pouvez automatiser toutes ces étapes, la coupure peut ne durer
+  qu'une seconde, voire moins. Si une intervention manuelle est
+  nécessaire, cela prendra probablement entre quelques secondes et
+  quelques minutes.
 </para>
 
 <para>
@@ -145,6 +148,13 @@
         Ainsi, vous avez <emphasis>trois</emphasis> n&oelig;uds, un avec la
 	nouvelle version de &postgres;, et deux autres avec l'ancienne version.
       </para>
+
+      <para>
+        Note that this imposes a need to have &slony1; built against
+        <emphasis>both</emphasis> databases (<emphasis>e.g.</emphasis> - at
+        the very least, the binaries for the stored procedures need to have
+        been compiled against both versions of &postgres;).
+      </para>
     </listitem>
 
     <listitem>
@@ -205,11 +215,426 @@
     &postgres; a été <emphasis>considérablement</emphasis> amélioré depuis la
     version 7.2), cependant cette solution était plus pratique pour lui que
     les autres systèmes de réplication tels que
-    <productname>eRServer</productname>. Si vous recherchez désespérement ce
-    type de solution, contactez-le sur la liste des hackers de &postgres;. Il
-    n'est pas prévu que la version 7.2 de &postgres; soit supportée par une
-    version officielle de &slony1;
+    <productname>eRServer</productname>. La version 7.2 de &postgres; ne sera
+    jamais supportée par une version officielle de &slony1;.
   </para>
 </note>
 
+<sect2> <title>Example: Upgrading a single database with no existing replication </title>
+
+<para>This example uses concrete names, IP addresses, ports, etc. to describe
+in detail what is going on</para>
+
+   <sect3>
+    <title>The Environment</title>
+    <programlisting>
+		Database machine:
+			name = rome 
+			ip = 192.168.1.23
+			OS: Ubuntu 6.06 LTS
+			postgres user = postgres, group postgres
+			
+		Current PostgreSQL 
+			Version = 8.2.3 
+			Port 5432
+			Installed at: /data/pgsql-8.2.3
+			Data directory: /data/pgsql-8.2.3/data
+			Database to be moved: mydb
+			
+		New PostgreSQL installation
+			Version = 8.3.3
+			Port 5433
+			Installed at: /data/pgsql-8.3.3
+			Data directory: /data/pgsql-8.3.3/data
+			
+		Slony Version to be used = 1.2.14
+    </programlisting>
+   </sect3>
+   <sect3>
+    <title>Installing &slony1;</title>
+
+    <para>
+     How to install &slony1; is covered quite well in other parts of
+     the documentation (<xref linkend="installation"/>); we will just
+     provide a quick guide here.</para>
+
+      <programlisting>
+       wget http://main.slony.info/downloads/1.2/source/slony1-1.2.14.tar.bz2
+      </programlisting>
+
+      <para> Unpack and build as root with</para>
+      <programlisting>
+		tar xjf slony1-1.2.14.tar.bz2
+		cd slony1-1.2.14
+		./configure --prefix=/data/pgsql-8.2.3 --with-perltools=/data/pgsql-8.2.3/slony --with-pgconfigdir=/data/pgsql-8.2.3/bin
+		make clean
+		make
+		make install
+		chown -R postgres:postgres /data/pgsql-8.2.3
+		mkdir /var/log/slony
+		chown -R postgres:postgres /var/log/slony
+      </programlisting>
+
+      <para> Then repeat this for the 8.3.3 build.  A very important
+      step is the <command>make clean</command>; it is not so
+      important the first time, but when building the second time, it
+      is essential to clean out the old binaries, otherwise the
+      binaries will not match the &postgres; 8.3.3 build with the
+      result that &slony1; will not work there.  </para>
+
+   </sect3>
+   <sect3>
+    <title>Creating the slon_tools.conf</title>
+
+    <para>
+     The slon_tools.conf is <emphasis>the</emphasis> configuration
+     file. It contains all the configuration information, such as:
+
+     <orderedlist>
+      <listitem>
+       <para>All the nodes and their details (IPs, ports, db, user,
+	password)</para>
+      </listitem>
+      <listitem>
+       <para>All the tables to be replicated</para>
+      </listitem>
+      <listitem>
+       <para>All the sequences to be replicated</para>
+      </listitem>
+      <listitem>
+       <para> How the tables and sequences are arranged in sets</para>
+      </listitem>
+     </orderedlist>
+     </para>
+     <para> Make a copy of
+      <filename>/data/pgsql-8.2.3/etc/slon_tools.conf-sample</filename>
+      to <filename>slon_tools.conf</filename> and open it. The comments
+      in this file are fairly self explanatory. Since this is a one time
+      replication you will generally not need to split into multiple
+      sets. On a production machine running with 500 tables and 100
+      sequences, putting them all in a single set has worked fine.</para>
+      
+      <orderedlist>
+       <para>A few modifications to do:</para>
+       <listitem>
+	<para> In our case we only need 2 nodes, so delete the <command>add_node</command>
+	 entries for nodes 3 and 4.</para>
+       </listitem>
+       <listitem>
+	<para> The <envar>pkeyedtables</envar> entry needs to be updated with your tables that
+	 have a primary key. If your tables are spread across multiple
+	 schemas, then you need to qualify the table name with the schema
+	 (schema.tablename)</para>
+       </listitem>
+       <listitem>
+	<para> <envar>keyedtables</envar> entries need to be updated
+	with any tables that match the comment (with good schema
+	design, there should not be any).
+	</para>
+       </listitem>
+       <listitem>
+	<para> <envar>serialtables</envar> (if you have any; as it says, it is wise to avoid this).</para>
+       </listitem>
+       <listitem>
+	<para> <envar>sequences</envar>  needs to be updated with your sequences.
+	</para>
+       </listitem>
+       <listitem>
+	<para>Remove the whole set2 entry (as we are only using set1)</para>
+       </listitem>
+      </orderedlist>
+     <para>
+      This is what it looks like with all comments stripped out:
+      <programlisting>
+$CLUSTER_NAME = 'replication';
+$LOGDIR = '/var/log/slony';
+$MASTERNODE = 1;
+
+    add_node(node     => 1,
+	     host     => 'rome',
+	     dbname   => 'mydb',
+	     port     => 5432,
+	     user     => 'postgres',
+         password => '');
+
+    add_node(node     => 2,
+	     host     => 'rome',
+	     dbname   => 'mydb',
+	     port     => 5433,
+	     user     => 'postgres',
+         password => '');
+
+$SLONY_SETS = {
+    "set1" => {
+	"set_id" => 1,
+	"table_id"    => 1,
+	"sequence_id" => 1,
+        "pkeyedtables" => [
+			   'mytable1',
+			   'mytable2',
+			   'otherschema.mytable3',
+			   'otherschema.mytable4',
+			   'otherschema.mytable5',
+			   'mytable6',
+			   'mytable7',
+			   'mytable8',
+			   ],
+
+		"sequences" => [
+			   'mytable1_sequence1',
+   			   'mytable1_sequence2',
+			   'otherschema.mytable3_sequence1',
+   			   'mytable6_sequence1',
+   			   'mytable7_sequence1',
+   			   'mytable7_sequence2',
+			],
+    },
+
+};
+
+1;
+      </programlisting>
+      </para>
+      <para> As can be seen, this database is pretty small, with only 8
+      tables and 6 sequences. Now copy your
+      <filename>slon_tools.conf</filename> into
+      <filename>/data/pgsql-8.2.3/etc/</filename> and
+      <filename>/data/pgsql-8.3.3/etc/</filename>
+      </para>
+   </sect3>
+   <sect3>
+    <title>Preparing the new &postgres; instance</title>
+    <para> You now have a fresh second instance of &postgres; running on
+     port 5433 on the same machine.  Now it is time to prepare it to
+     receive &slony1; replication data.</para>
+    <orderedlist>
+     <listitem>
+      <para>Slony does not replicate roles, so first create all the
+       users on the new instance so it is identical in terms of
+       roles/groups</para>
+     </listitem>
+     <listitem>
+      <para>
+       Create your db in the same encoding as original db, in my case
+       UTF8
+       <command>/data/pgsql-8.3.3/bin/createdb
+	-E UNICODE -p5433 mydb</command>
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       &slony1; replicates data, not schemas, so take a dump of your schema
+       <command>/data/pgsql-8.2.3/bin/pg_dump
+	-s mydb > /tmp/mydb.schema</command>
+       and then import it on the new instance.
+       <command>cat /tmp/mydb.schema | /data/pgsql-8.3.3/bin/psql -p5433
+	mydb</command>
+      </para>
+     </listitem>
+    </orderedlist>
+
+    <para>The new database is now ready to start receiving replication
+    data</para>
+
+   </sect3>
+   <sect3>
+    <title>Initiating &slony1; Replication</title>
+    <para>This is the point where we start changing your current
+     production database by adding a new schema to it that  contains
+     all the &slony1; replication information</para>
+    <para>The first thing to do is to initialize the &slony1;
+     schema.  Do the following as the postgres user defined in the example environment.</para>
+    <note>
+     <para> The scripts whose names start with <command>slonik_</command> do not do anything
+      themselves; they only generate slonik command output that can be interpreted
+      by the slonik binary. So issuing any of the scripts starting with
+      slonik_ will not, by itself, do anything to your database. Also, by default the
+      slonik_ scripts will look for your slon_tools.conf in the etc
+      directory of the &postgres; installation, in my case
+      <filename>/data/pgsql-8.x.x/etc</filename>, depending on which one you are working on.</para>
+    </note>
+    <para>
+     <command>/data/pgsql-8.2.3/slony/slonik_init_cluster
+      > /tmp/init.txt</command>
+    </para>
+    <para>Open /tmp/init.txt; it should look something like
+     this</para>
+    <programlisting>
+# INIT CLUSTER
+cluster name = replication;
+ node 1 admin conninfo='host=rome dbname=mydb user=postgres port=5432';
+ node 2 admin conninfo='host=rome dbname=mydb user=postgres port=5433';
+  init cluster (id = 1, comment = 'Node 1 - mydb@rome');
+
+# STORE NODE
+  store node (id = 2, event node = 1, comment = 'Node 2 - mydb@rome');
+  echo 'Set up replication nodes';
+
+# STORE PATH
+  echo 'Next: configure paths for each node/origin';
+  store path (server = 1, client = 2, conninfo = 'host=rome dbname=mydb user=postgres port=5432');
+  store path (server = 2, client = 1, conninfo = 'host=rome dbname=mydb user=postgres port=5433');
+  echo 'Replication nodes prepared';
+  echo 'Please start a slon replication daemon for each node';
+     
+    </programlisting>
+    <para>The first section indicates node information and the
+    initialization of the cluster, then it adds the second node to the
+    cluster and finally stores communications paths for both nodes in
+    the slony schema.</para>
+    <para>
+     Now it is time to execute the command:
+     <command>cat /tmp/init.txt | /data/pgsql-8.2.3/bin/slonik</command>
+    </para>
+    <para>This will run pretty quickly and give you some output to
+    indicate success.</para>
+    <para>
+     If things do fail, the most likely reasons would be database
+     permissions, <filename>pg_hba.conf</filename> settings, or typos
+     in <filename>slon_tools.conf</filename>. Look over your problem
+     and solve it.  If slony schemas were created but it still failed
+     you can issue the script <command>slonik_uninstall_nodes</command> to
+     clean things up.  In the worst case you may connect to each
+     database and issue <command>drop schema _replication cascade;</command>
+     to clean up.
+    </para>
+   </sect3>
+   <sect3>
+    <title>The slon daemon</title>
+
+    <para>As the result from the last command told us, we should now
+    be starting a slon replication daemon for each node! The slon
+    daemon is what makes the replication work. All transfers and all
+    work is done by the slon daemon. One is needed for each node. So
+    in our case we need one for the 8.2.3 installation and one for the
+    8.3.3.</para>
+
+    <para> To start one for 8.2.3 you would do:
+    <command>/data/pgsql-8.2.3/slony/slon_start 1 --nowatchdog</command>
+    This starts the daemon for node 1.  We use --nowatchdog because,
+    since we are running a very small replication setup, we do not need
+    a watchdog keeping an eye on whether the slon process stays up.  </para>
+
+    <para>If it says it started successfully, have a look at the log file
+     in /var/log/slony/slony1/node1/; it will show that the process was
+     started ok.</para>
+
+    <para> We need to start one for 8.3.3 as well:
+    <command>/data/pgsql-8.3.3/slony/slon_start 2 --nowatchdog</command>
+    </para>
+
+    <para>If it says it started successfully, have a look at the log
+    file in /var/log/slony/slony1/node2/; it will show that the process
+    was started ok.</para>
+   </sect3>
+   <sect3>
+    <title>Adding the replication set</title>
+    <para>We now need to let the slon replication know which tables and
+     sequences it is to replicate. We need to create the set.</para>
+    <para>
+     Issue the following:
+     <command>/data/pgsql-8.2.3/slony/slonik_create_set
+      set1 > /tmp/createset.txt</command>
+    </para>
+
+    <para> <filename> /tmp/createset.txt</filename> may be quite lengthy depending on how
+     many tables; in any case, take a quick look and it should make sense as it
+     defines all the tables and sequences to be replicated</para>
+
+    <para>
+     If you are happy with the result, send the file to slonik for
+     execution:
+     <command>cat /tmp/createset.txt | /data/pgsql-8.2.3/bin/slonik
+     </command>
+     You will see quite a lot rolling by, one entry for each table.
+    </para>
+    <para>You now have defined what is to be replicated</para>
+   </sect3>
+   <sect3>
+    <title>Subscribing all the data</title>
+    <para>
+     The final step is to get all the data onto the new database. It is
+     simply done using the subscribe script.
+     <command>/data/pgsql-8.2.3/slony/slonik_subscribe_set
+      1 2 > /tmp/subscribe.txt</command>
+     The first argument is the ID of the set, the second is the node
+     that is to subscribe.
+    </para>
+    <para>
+     It will look something like this:
+     <programlisting>
+ cluster name = replication;
+ node 1 admin conninfo='host=rome dbname=mydb user=postgres port=5432';
+ node 2 admin conninfo='host=rome dbname=mydb user=postgres port=5433';
+  try {
+    subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
+  }
+  on error {
+    exit 1;
+  }
+  echo 'Subscribed nodes to set 1';
+     </programlisting>
+     Send it to slonik:
+     <command>cat /tmp/subscribe.txt | /data/pgsql-8.2.3/bin/slonik
+     </command>
+    </para>
+    <para>The replication will now start. It will copy everything in
+     the tables/sequences that are in the set. Understandably, this can take
+     quite some time, depending on the size of the database and the power of the
+     machine.</para>
+    <para>
+     One way to keep track of the progress would be to do the following:
+     <command>tail -f /var/log/slony/slony1/node2/log | grep -i copy
+     </command>
+     The slony logging is pretty verbose, and doing it this way will let
+     you know how the copying is going. At some point it will say "copy
+     completed sucessfully in xxx seconds"; when you see this, it is
+     done!
+    </para>
+    <para>Once this is done it will start trying to catch up with all
+     data that has come in since the replication was started. You can
+     easily view the progress of this in the database. Go to the master
+     database; in the replication schema there is a view called
+     sl_status. It is pretty self explanatory. The field of most interest
+     is "st_lag_num_events"; this indicates how many slony events behind
+     the node is. 0 is best, but it all depends on how active your db is.
+     The field next to it, st_lag_time, is an estimate of how far behind
+     it is lagging in time. Take this with a grain of salt; the actual
+     event count is a more accurate measure of lag.</para>
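+    <para>For example (assuming the cluster name <quote>replication</quote>
+     used throughout this example, so the schema is _replication), a quick
+     check might look like:</para>
+    <programlisting>
+-- Run against the master database; the schema name follows this example's setup.
+select st_origin, st_received, st_lag_num_events, st_lag_time
+  from _replication.sl_status;
+    </programlisting>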
+    <para>You now have a fully replicated database</para>
+   </sect3>
+   <sect3>
+    <title>Switching over</title>
+    <para>Our database is fully replicated and it is keeping up. There
+     are a few different options for doing the actual switch over; it all
+     depends on how much time you have to work with and the trade-off between
+     down time and data loss. The most brute-force, fast way of doing it would be:
+    </para>
+    <orderedlist>
+     <listitem>
+      <para>First modify the postgresql.conf file for the 8.3.3 to
+       use port 5432 so that it is ready for the restart</para>
+     </listitem>
+     <listitem>
+      <para>From this point you will have down time. Shut down the
+       8.2.3 &postgres; installation.</para>
+     </listitem>
+     <listitem>
+      <para>Restart the 8.3.3 &postgres; installation. It should
+       come up ok.</para>
+     </listitem>
+     <listitem>
+      <para>
+       Drop all the Slony objects from the 8.3.3 installation: log in with psql
+       to the 8.3.3 instance and issue
+       <command>drop schema _replication cascade;</command>
+      </para>
+     </listitem>
+    </orderedlist>
+    <para>You have now upgraded to 8.3.3 with, hopefully, minimal down
+    time. This procedure represents roughly the simplest way to do
+    this.</para>
+   </sect3>
+  </sect2>
+
 </sect1>


