 Testing Kate
==============

Author: Leo Savernik

Kate contains regression tests to ensure that fixed bugs do not reappear in
newer versions. To facilitate regression testing, a dedicated application,
testkateregression, executes the regression tests and compares their output
to the expected results, reporting passed as well as failed testcases.


1. Using testkateregression
  --------------------------

We have tried to make regression testing for Kate as easy as possible so that
you can run it before each commit and catch regressions caused by your changes
before they are shipped as part of a release.

Running all regression tests works by simply invoking

	> make check

in your kate build directory. While running, testkateregression prints a line
for each executed testcase, prefixed with "PASS" if it passed and "FAIL" if it
failed. Furthermore, testkateregression stores a comprehensive output log under
<katetests-directory>/output/index.html. This log is invaluable for determining
why a certain testcase failed.

The first time you invoke testkateregression, it will print instructions on
how to fetch the testsuite and how to point testkateregression to it. This
setup only has to be done once per branch.


2. Discriminating your regressions against existing regressions
  --------------------------------------------------------------

In an ideal universe, all testcases always pass. In this universe, however,
some testcases fail, be it because they anticipate future features not yet
implemented, be it because of nasty bugs which cannot be fixed easily.

This means that if you have hacked on Kate for quite a while and then fire up
"make check", you are likely to see many failed tests pass by, most of them
*not* caused by your changes, as they already failed before.

To discriminate the failures caused by your changes from the pre-existing
ones, testkateregression provides the option --save-failures=<name>, which
runs the regression tests and stores all failures in a failure snapshot
identified by <name>.
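
For example, to store a snapshot under a name of your choosing (the name
"before-refactor" below is made up for this illustration):

	> ./testkateregression --save-failures=before-refactor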

The next time you run "make check", testkateregression automatically picks up
the most recently stored failure snapshot and compares the current failures and
passes with those stored in the snapshot. Each failure not listed in the
failure snapshot will be prefixed with "FAIL (new)", indicating that it is a
new failure. Testcases which failed in the snapshot but pass now are prefixed
with "PASS (new)", indicating that they seem to be fixed.


3. Using testkateregression efficiently
  --------------------------------------

To get the most out of regression testing, we suggest the following
development approach:

   1. Before you change Kate, update and run testkateregression in the part
      subdirectory:

	> make testkateregression && ./testkateregression --save-failures=last

      This will produce a failure snapshot called "last".
   
   2. Hack on Kate.

   3. Before you commit, run

	> make check

      It will automatically pick up the failure snapshot "last" (provided you
      didn't generate a newer one in the meantime) and compare all results with
      the previously stored ones.
      
      If you inspect <katetests-directory>/output/index.html, the new failures
      are marked red. Those are of interest to you, because they have been
      caused by your changes.
      
      New passes are marked green. These were former failures which started
      working due to your changes.
      
      Go to step 2 while there are any new failures.
      
   4. Commit.


4. Invoking testkateregression directly
  --------------------------------------
  
While make check is handy and simple enough for the common case, you might
sometimes need more control over regression testing.

testkateregression features a broad range of options, enabling you to run
only selected testcases, specify an alternate output directory for the logs,
etc.

	> ./testkateregression --help

will provide you with a complete list of options.
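
For example, to run just one testcase (here the one developed in section 7),
pass its path relative to the tests directory:

	> ./testkateregression indent/csmart/openbrc.txt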


5. Structure of the regression test suite
  ----------------------------------------

Kate's regression testsuite is located in the KDE repository under

	trunk/tests/katetests/regression

and consists of two subdirectories

	baseline
	tests

The latter, tests, contains a directory hierarchy with all testcases to be run
by testkateregression. The former, baseline, contains the results expected from
correct operation. A mismatch between the output of a test and its baseline is
considered a failure.

Each directory under tests may optionally contain any of the following files.

	.kateconfig
	.kateconfig-commands
	ignore
	KNOWN_FAILURES

.kateconfig: This file works exactly like the .kateconfig supported by the
kate and kwrite editors. It may contain any kate variable lines necessary to
set up the testcases properly. Note that .kateconfig files from parent
directories are not merged with .kateconfig files from child directories.

.kateconfig-commands: This file may contain any commands that can be entered
on kate's command line (F7). Each line is interpreted as one command. In
contrast to .kateconfig, .kateconfig-commands files are merged with
.kateconfig-commands files from parent directories. Nearer ancestors' commands
take precedence over farther ancestors'.
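
A minimal sketch of such a file, one command per line (assuming your KatePart
version provides these particular command line commands):
---------------------------
set-indent-width 2
set-tab-width 8
---------------------------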

ignore: This file specifies, one per line, files to be ignored in the
directory in which the ignore file is located. This enables you to mark helper
files which would otherwise be interpreted as testcases. Note that hidden
files (.*) are ignored by default and cannot be "unignored".

KNOWN_FAILURES: This file specifies, one per line, the file names of testcases
which are known to fail. Such known failures are counted towards the total
number of failures, but they don't cause testkateregression to return a
failure code.
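
For example, an ignore file excluding two hypothetical helper files (the names
are made up) simply lists them, one per line; KNOWN_FAILURES uses the same
format for testcases expected to fail:
---------------------------
shared-content.txt
helper-functions.js
---------------------------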


6. Structure of a testcase
  -------------------------

A testcase consists of a simple plain text file <testcase>.txt which may
be located in any subdirectory under tests. This file contains the *initial*
content the testcase operates on.

Each <testcase>.txt must be accompanied by a <testcase>.txt-script which
contains the actual tests to be performed on the testcase. It consists of
simple JavaScript statements for direct interfacing with Kate.

Last but not least, a <testcase>.txt-result exists under the baseline
subdirectory, which mirrors the directory hierarchy of tests. This file
contains the expected *result* of the performed tests.
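
Taking the testcase developed in the next section as an example, the three
files involved are laid out as follows:

	tests/indent/csmart/openbrc.txt             (initial content)
	tests/indent/csmart/openbrc.txt-script      (JavaScript test script)
	baseline/indent/csmart/openbrc.txt-result   (expected result)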


7. Writing a simple testcase
  ---------------------------

Writing your own testcases is easy once you know how to get started. Let's
test how Kate's C-Style indenter fares with indenting after opening braces.

First, we create the initial content under tests/indent/csmart/openbrc.txt
and fill it with (the dashed lines are not part of the content)
---------------------------

int main() {

---------------------------

Now, we need to write a script performing some actions. We therefore create
a file tests/indent/csmart/openbrc.txt-script and fill it with
---------------------------
v.setCursorPosition(1,12);
v.enter();
v.type("good");
---------------------------

Here, we set the initial cursor position to line 1, column 12 (the coordinates
are zero-based, so this is the second line of the file), which happens to be
just after the opening brace. Then v.enter() simulates pressing the return key
in the editor, thus inserting a new line. v.type simulates typing the word
"good" at the current position of the cursor.

The options in the .kateconfig for this directory (see section 5) select the
C-Style indenter for the testcases and set an indent width of two. With this
information, we know what to expect as a result.
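
A minimal sketch of what that .kateconfig could contain; the exact variable
names depend on your KatePart version and are given here only as an
illustration (the dashed lines are again not part of the content):
---------------------------
kate: indent-mode cstyle; indent-width 2;
---------------------------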

What we are still missing is the expected result itself, which we create under
baseline/indent/csmart/openbrc.txt-result and fill it with
---------------------------

int main() {
  good

---------------------------

You can see that "good" is indented by two spaces, even though we didn't
specify those with v.type. We expect the indenter to provide them for us.

Finally, we run the testcase by invoking, in kate's part directory,

	> ./testkateregression indent/csmart/openbrc.txt

and checking whether it works the way we intended.


8. The JavaScript interface to the testcases
  -------------------------------------------
  
testkateregression provides you with the following global objects for each
testcase:

	v - the view object
	d - the document object

Each object provides the same methods and fields as the respective JavaScript
interfaces built into Kate, for example v.setCursorPosition.

Additionally, v provides the following methods unique to testkateregression.

type(<string>)
	Inserts <string> at the current cursor position as if <string> had
	been typed on the keyboard. Unlike insert(<string>), it will
	trigger indentation and other checks.
enter(), returnKey()
	Inserts a new line as if the return key had been pressed. This will
	trigger special indentation rules.
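
To tie these together, here is a short sketch of what a <testcase>.txt-script
could look like. The cursor coordinates and the typed text are made up and
have to match the initial content of your own testcase:
---------------------------
// zero-based coordinates: first line, just after its last character
v.setCursorPosition(0,12);
// simulate pressing the return key, which triggers the indenter
v.enter();
// simulate typing at the new cursor position
v.type("return 0;");
---------------------------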