= 4.0.4 () =

* Fixed a bug that sometimes created disconnected trees.

* Fixed a bug with the string setter that moved a string around the
  tree instead of copying it. [bug=983050]

* Attribute values are now run through the provided output formatter.
  Previously they were always run through the 'minimal' formatter. In
  the future I may make it possible to specify different formatters
  for attribute values and strings, but for now, consistent behavior
  is better than inconsistent behavior. [bug=980237]

* Added the missing renderContents method from Beautiful Soup 3. Also
  added an encode_contents() method to go along with decode_contents().
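
  A minimal sketch of the pair (using the built-in html.parser; the
  markup is made up for illustration). decode_contents() renders a
  tag's children as a Unicode string, while encode_contents() renders
  them as bytes:

  ```python
  from bs4 import BeautifulSoup

  soup = BeautifulSoup("<b><i>x</i></b>", "html.parser")
  # Children of <b> as a Unicode string
  print(soup.b.decode_contents())
  # The same children, encoded to bytes
  print(soup.b.encode_contents())
  ```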

* Give a more useful error when the user tries to run the Python 2
  version of BS under Python 3.

* UnicodeDammit can now convert Microsoft smart quotes to ASCII with
  UnicodeDammit(markup, smart_quotes_to="ascii").
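
  For instance (a minimal sketch; the byte string here is an invented
  stand-in for Windows-1252 markup, with \x93 and \x94 being the curly
  double quotes, and the encoding is passed as an override so detection
  is deterministic):

  ```python
  from bs4 import UnicodeDammit

  # Curly quotes around "hello", encoded as Windows-1252
  dammit = UnicodeDammit(b"\x93hello\x94", ["windows-1252"],
                         smart_quotes_to="ascii")
  # The smart quotes come out as plain ASCII double quotes
  print(dammit.unicode_markup)
  ```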

= 4.0.3 (20120403) =

* Fixed a typo that caused some versions of Python 3 to convert the
  Beautiful Soup codebase incorrectly.

* Got rid of the 4.0.2 workaround for HTML documents--it was
  unnecessary and the workaround was triggering a (possibly different,
  but related) bug in lxml. [bug=972466]

= 4.0.2 (20120326) =

* Worked around a possible bug in lxml that prevents non-tiny XML
  documents from being parsed. [bug=963880, bug=963936]

* Fixed a bug where specifying `text` while also searching for a tag
  only worked if `text` wanted an exact string match. [bug=955942]

= 4.0.1 (20120314) =

* This is the first official release of Beautiful Soup 4. There is no
  4.0.0 release, to eliminate any possibility that packaging software
  might treat "4.0.0" as being an earlier version than "4.0.0b10".

* Brought BS up to date with the latest release of soupselect, adding
  CSS selector support for direct descendant matches and multiple CSS
  class matches.
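
  Both new kinds of selector can be sketched like this (the markup is
  invented for illustration; modern bs4 routes select() through the
  soupsieve package):

  ```python
  from bs4 import BeautifulSoup

  html = '<div><p class="alert note">Hi</p><span><p>deep</p></span></div>'
  soup = BeautifulSoup(html, "html.parser")

  # Direct descendant match: only <p> tags that are children of the <div>
  print([p.get_text() for p in soup.select("div > p")])
  # Multiple CSS class match: the tag must carry both classes
  print([p.get_text() for p in soup.select("p.alert.note")])
  ```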

= 4.0.0b10 (20120302) =

* Added support for simple CSS selectors, taken from the soupselect project.

* Fixed a crash when using html5lib. [bug=943246]

* In HTML5-style <meta charset="foo"> tags, the value of the "charset"
  attribute is now replaced with the appropriate encoding on
  output. [bug=942714]

* Fixed a bug that caused calling a tag to sometimes call find_all()
  with the wrong arguments. [bug=944426]

* For backwards compatibility, brought back the BeautifulStoneSoup
  class as a deprecated wrapper around BeautifulSoup.

= 4.0.0b9 (20120228) =

* Fixed the string representation of DOCTYPEs that have both a public
  ID and a system ID.

* Fixed the generated XML declaration.

* Renamed Tag.nsprefix to Tag.prefix, for consistency with
  NamespacedAttribute.

* Fixed a test failure that occurred on Python 3.x when chardet was
  installed.

* Made prettify() return Unicode by default, so it will look nice on
  Python 3 when passed into print().

= 4.0.0b8 (20120224) =

* All tree builders now preserve namespace information in the
  documents they parse. If you use the html5lib parser or lxml's XML
  parser, you can access the namespace URL for a tag as tag.namespace.

  However, there is no special support for namespace-oriented
  searching or tree manipulation. When you search the tree, you need
  to use namespace prefixes exactly as they're used in the original
  document.

* The string representation of a DOCTYPE always ends in a newline.

* Issue a warning if the user tries to use a SoupStrainer in
  conjunction with the html5lib tree builder, which doesn't support
  them.

= 4.0.0b7 (20120223) =

* Upon decoding to string, any characters that can't be represented in
  your chosen encoding will be converted into numeric XML entity
  references.

* Issue a warning if characters were replaced with REPLACEMENT
  CHARACTER during Unicode conversion.

* Restored compatibility with Python 2.6.

* The install process no longer installs docs or auxiliary text files.

* It's now possible to deepcopy a BeautifulSoup object created with
  Python's built-in HTML parser.

* About 100 unit tests that "test" the behavior of various parsers on
  invalid markup have been removed. Legitimate changes to those
  parsers caused these tests to fail, indicating that perhaps
  Beautiful Soup should not test the behavior of foreign
  libraries.

  The problematic unit tests have been reformulated as informational
  comparisons generated by the script
  scripts/demonstrate_parser_differences.py.

  This makes Beautiful Soup compatible with html5lib version 0.95 and
  future versions of HTMLParser.

= 4.0.0b6 (20120216) =

* Multi-valued attributes like "class" always have a list of values,
  even if there's only one value in the list.
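
  For instance (markup invented for illustration):

  ```python
  from bs4 import BeautifulSoup

  soup = BeautifulSoup('<p class="body strikeout"></p><p class="body"></p>',
                       "html.parser")
  two, one = soup.find_all("p")
  print(two["class"])  # two values
  print(one["class"])  # a single value is still wrapped in a list
  ```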

* Added a number of multi-valued attributes defined in HTML5.

* Stopped generating a space before the slash that closes an
  empty-element tag. This may come back if I add a special XHTML mode
  (http://www.w3.org/TR/xhtml1/#C_2), but right now it's pretty
  useless.

* Passing text along with tag-specific arguments to a find* method:

   find("a", text="Click here")

  will find tags that contain the given text as their
  .string. Previously, the tag-specific arguments were ignored and
  only strings were searched.

* Fixed a bug that caused the html5lib tree builder to build a
  partially disconnected tree. Generally cleaned up the html5lib tree
  builder.

* If you restrict a multi-valued attribute like "class" to a string
  that contains spaces, Beautiful Soup will only consider it a match
  if the values correspond to that specific string.

= 4.0.0b5 (20120209) =

* Rationalized Beautiful Soup's treatment of CSS class. A tag
  belonging to multiple CSS classes is treated as having a list of
  values for the 'class' attribute. Searching for a CSS class will
  match *any* of the CSS classes.

  This actually affects all attributes that the HTML standard defines
  as taking multiple values (class, rel, rev, archive, accept-charset,
  and headers), but 'class' is by far the most common. [bug=41034]

* If you pass anything other than a dictionary as the second argument
  to one of the find* methods, it'll assume you want to use that
  object to search against a tag's CSS classes. Previously this only
  worked if you passed in a string.
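
  A quick sketch of both cases (markup invented for illustration):

  ```python
  import re

  from bs4 import BeautifulSoup

  soup = BeautifulSoup('<p class="note">a</p><p class="warning">b</p>',
                       "html.parser")
  # A string as the second argument is matched against CSS classes...
  print([p.get_text() for p in soup.find_all("p", "note")])
  # ...and so is any other non-dictionary object, e.g. a regular expression
  print([p.get_text() for p in soup.find_all("p", re.compile("^warn"))])
  ```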

* Fixed a bug that caused a crash when you passed a dictionary as an
  attribute value (possibly because you mistyped "attrs"). [bug=842419]

* Unicode, Dammit now detects the encoding in HTML 5-style <meta> tags
  like <meta charset="utf-8" />. [bug=837268]

* If Unicode, Dammit can't figure out a consistent encoding for a
  page, it will try each of its guesses again, with errors="replace"
  instead of errors="strict". This may mean that some data gets
  replaced with REPLACEMENT CHARACTER, but at least most of it will
  get turned into Unicode. [bug=754903]

* Patched over a bug in html5lib (?) that was crashing Beautiful Soup
  on certain kinds of markup. [bug=838800]

* Fixed a bug that wrecked the tree if you replaced an element with an
  empty string. [bug=728697]

* Improved Unicode, Dammit's behavior when you give it Unicode to
  begin with.

= 4.0.0b4 (20120208) =

* Added BeautifulSoup.new_string() to go along with BeautifulSoup.new_tag()

* BeautifulSoup.new_tag() will follow the rules of whatever
  tree-builder was used to create the original BeautifulSoup object. A
  new <p> tag will look like "<p />" if the soup object was created to
  parse XML, but it will look like "<p></p>" if the soup object was
  created to parse HTML.
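
  A sketch of the two methods together (showing only the HTML side,
  since demonstrating the "<p/>" behavior would require lxml's XML
  parser; the markup is invented for illustration):

  ```python
  from bs4 import BeautifulSoup

  soup = BeautifulSoup("<body></body>", "html.parser")
  tag = soup.new_tag("p")
  tag.append(soup.new_string("hello"))
  soup.body.append(tag)
  # The soup was created to parse HTML, so the new tag renders as <p>...</p>
  print(soup)
  ```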

* We pass in strict=False to html.parser on Python 3, greatly
  improving html.parser's ability to handle bad HTML.

* We also monkeypatch a serious bug in html.parser that made
  strict=False disastrous on Python 3.2.2.

* Replaced the "substitute_html_entities" argument with the
  more general "formatter" argument.

* Bare ampersands and angle brackets are always converted to XML
  entities unless the user prevents it.

* Added PageElement.insert_before() and PageElement.insert_after(),
  which let you put an element into the parse tree with respect to
  some other element.
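
  For instance (a minimal sketch; the markup is invented for
  illustration):

  ```python
  from bs4 import BeautifulSoup

  soup = BeautifulSoup("<b>stop</b>", "html.parser")
  tag = soup.new_tag("i")
  tag.string = "Don't "
  # Put the new <i> tag immediately before the string "stop"
  soup.b.string.insert_before(tag)
  print(soup)
  ```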

* Raise an exception when the user tries to do something nonsensical
  like insert a tag into itself.


= 4.0.0b3 (20120203) =

Beautiful Soup 4 is a nearly-complete rewrite that removes Beautiful
Soup's custom HTML parser in favor of a system that lets you write a
little glue code and plug in any HTML or XML parser you want.

Beautiful Soup 4.0 comes with glue code for four parsers:

 * Python's standard HTMLParser (html.parser in Python 3)
 * lxml's HTML and XML parsers
 * html5lib's HTML parser

HTMLParser is the default, but I recommend you install lxml if you
can.

For complete documentation, see the Sphinx documentation in
bs4/doc/source/. What follows is a summary of the changes from
Beautiful Soup 3.

=== The module name has changed ===

Previously you imported the BeautifulSoup class from a module also
called BeautifulSoup. To save keystrokes and make it clear which
version of the API is in use, the module is now called 'bs4':

    >>> from bs4 import BeautifulSoup

=== It works with Python 3 ===

Beautiful Soup 3.1.0 worked with Python 3, but the parser it used was
so bad that it barely worked at all. Beautiful Soup 4 works with
Python 3, and since its parser is pluggable, you don't sacrifice
quality.

Special thanks to Thomas Kluyver and Ezio Melotti for getting Python 3
support to the finish line. Ezio Melotti is also to thank for greatly
improving the HTML parser that comes with Python 3.2.

=== CDATA sections are normal text, if they're understood at all. ===

Currently, the lxml and html5lib HTML parsers ignore CDATA sections in
markup:

 <p><![CDATA[foo]]></p> => <p></p>

A future version of html5lib will turn CDATA sections into text nodes,
but only within tags like <svg> and <math>:

 <svg><![CDATA[foo]]></svg> => <svg>foo</svg>

The default XML parser (which uses lxml behind the scenes) turns CDATA
sections into ordinary text elements:

 <p><![CDATA[foo]]></p> => <p>foo</p>

In theory it's possible to preserve the CDATA sections when using the
XML parser, but I don't see how to get it to work in practice.

=== Miscellaneous other stuff ===

If the BeautifulSoup instance has .is_xml set to True, an appropriate
XML declaration will be emitted when the tree is transformed into a
string:

    <?xml version="1.0" encoding="utf-8"?>
    <markup>
     ...
    </markup>

The ['lxml', 'xml'] tree builder sets .is_xml to True; the other tree
builders set it to False. If you want to parse XHTML with an HTML
parser, you can set it manually.


= 3.2.0 =

The 3.1 series wasn't very useful, so I renamed the 3.0 series to 3.2
to make it obvious which one you should use.

= 3.1.0 =

A hybrid version that supports 2.4 and can be automatically converted
to run under Python 3.0. There are three backwards-incompatible
changes you should be aware of, but no new features or deliberate
behavior changes.

1. str() may no longer do what you want. This is because the meaning
of str() inverts between Python 2 and 3; in Python 2 it gives you a
byte string, in Python 3 it gives you a Unicode string.

The effect of this is that you can't pass an encoding to .__str__
anymore. Use encode() to get a string and decode() to get Unicode, and
you'll be ready (well, readier) for Python 3.
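
Illustrated with the modern bs4 names (the markup is invented for
illustration; the same advice applied to Beautiful Soup 3's methods):

    ```python
    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<p>caf\u00e9</p>", "html.parser")
    # encode() gives you a byte string in the encoding you ask for...
    print(soup.p.encode("utf-8"))
    # ...while decode() gives you Unicode
    print(soup.p.decode())
    ```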

2. Beautiful Soup is now based on HTMLParser rather than SGMLParser,
which is gone in Python 3. There's some bad HTML that SGMLParser
handled but HTMLParser doesn't, usually to do with attribute values
that aren't closed or have brackets inside them:

  <a href="foo</a>, </a><a href="bar">baz</a>
  <a b="<a>">

A later version of Beautiful Soup will allow you to plug in different
parsers to make tradeoffs between speed and the ability to handle bad
HTML.

3. In Python 3 (but not Python 2), HTMLParser converts entities within
attributes to the corresponding Unicode characters. In Python 2 it's
possible to parse this string and leave the &eacute; intact.

 <a href="http://crummy.com?sacr&eacute;&bleu">

In Python 3, the &eacute; is always converted to \xe9 during
parsing.


= 3.0.7a =

Added an import that makes BS work in Python 2.3.


= 3.0.7 =

Fixed a UnicodeDecodeError when unpickling documents that contain
non-ASCII characters.

Fixed a TypeError that occurred in some circumstances when a tag
contained no text.

Jump through hoops to avoid the use of chardet, which can be extremely
slow in some circumstances. UTF-8 documents should never trigger the
use of chardet.

Whitespace is preserved inside <pre> and <textarea> tags that contain
nothing but whitespace.

Beautiful Soup can now parse a doctype that's scoped to an XML namespace.


= 3.0.6 =

Got rid of a very old debug line that prevented chardet from working.

Added a Tag.decompose() method that completely disconnects a tree or a
subset of a tree, breaking it up into bite-sized pieces that are
easy for the garbage collector to collect.

Tag.extract() now returns the tag that was extracted.

Tag.findNext() now does something with the keyword arguments you pass
it instead of dropping them on the floor.

Fixed a Unicode conversion bug.

Fixed a bug that garbled some <meta> tags when rewriting them.


= 3.0.5 =

Soup objects can now be pickled, and copied with copy.deepcopy.

Tag.append now works properly on existing BS objects. (It wasn't
originally intended for outside use, but it can be now.) (Giles
Radford)

Passing in a nonexistent encoding will no longer crash the parser on
Python 2.4 (John Nagle).

Fixed an underlying bug in SGMLParser that thought ASCII had 255
characters instead of 127 (John Nagle).

Entities are converted more consistently to Unicode characters.

Entity references in attribute values are now converted to Unicode
characters when appropriate. Numeric entities are always converted,
because SGMLParser always converts them outside of attribute values.

ALL_ENTITIES happens to just be the XHTML entities, so I renamed it to
XHTML_ENTITIES.

The regular expression for bare ampersands was too loose. In some
cases ampersands were not being escaped. (Sam Ruby?)

Non-breaking spaces and other special Unicode space characters are no
longer folded to ASCII spaces. (Robert Leftwich)

Information inside a TEXTAREA tag is now parsed literally, not as HTML
tags. TEXTAREA now works exactly the same way as SCRIPT. (Zephyr Fang)

= 3.0.4 =

Fixed a bug that crashed Unicode conversion in some cases.

Fixed a bug that prevented UnicodeDammit from being used as a
general-purpose data scrubber.

Fixed some unit test failures when running against Python 2.5.

When considering whether to convert smart quotes, UnicodeDammit now
looks at the original encoding in a case-insensitive way.

= 3.0.3 (20060606) =

Beautiful Soup is now usable as a way to clean up invalid XML/HTML (be
sure to pass in an appropriate value for convertEntities, or XML/HTML
entities might stick around that aren't valid in HTML/XML). The result
may not validate, but it should be good enough to not choke a
real-world XML parser. Specifically, the output of a properly
constructed soup object should always be valid as part of an XML
document, but parts may be missing if they were missing in the
original. As always, if the input is valid XML, the output will also
be valid.

= 3.0.2 (20060602) =

Previously, Beautiful Soup correctly handled attribute values that
contained embedded quotes (sometimes by escaping), but not other kinds
of XML character. Now, it correctly handles or escapes all special XML
characters in attribute values.

I aliased methods to the 2.x names (fetch, find, findText, etc.) for
backwards compatibility purposes. Those names are deprecated and if I
ever do a 4.0 I will remove them. I will, I tell you!

Fixed a bug where the findAll method wasn't passing along any keyword
arguments.

When run from the command line, Beautiful Soup now acts as an HTML
pretty-printer, not an XML pretty-printer.

= 3.0.1 (20060530) =

Reintroduced the "fetch by CSS class" shortcut. I thought keyword
arguments would replace it, but they don't. You can't call soup('a',
class='foo') because class is a Python keyword.

If Beautiful Soup encounters a meta tag that declares the encoding,
but a SoupStrainer tells it not to parse that tag, Beautiful Soup will
no longer try to rewrite the meta tag to mention the new
encoding. Basically, this makes SoupStrainers work in real-world
applications instead of crashing the parser.

= 3.0.0 "Who would not give all else for two p" (20060528) =

This release is not backward-compatible with previous releases. If
you've got code written with a previous version of the library, go
ahead and keep using it, unless one of the features mentioned here
really makes your life easier. Since the library is self-contained,
you can include an old copy of the library in your old applications,
and use the new version for everything else.

The documentation has been rewritten and greatly expanded with many
more examples.

Beautiful Soup autodetects the encoding of a document (or uses the one
you specify), and converts it from its native encoding to
Unicode. Internally, it only deals with Unicode strings. When you
print out the document, it converts to UTF-8 (or another encoding you
specify). [Doc reference]

It's now easy to make large-scale changes to the parse tree without
screwing up the navigation members. The methods are extract,
replaceWith, and insert. [Doc reference. See also Improving Memory
Usage with extract]

Passing True in as an attribute value gives you tags that have any
value for that attribute. You don't have to create a regular
expression. Passing None for an attribute value gives you tags that
don't have that attribute at all.
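
For instance, using the modern bs4 spelling (the markup is invented
for illustration; in this era the method was called findAll):

    ```python
    from bs4 import BeautifulSoup

    soup = BeautifulSoup('<a href="/x">x</a><a name="y">y</a>', "html.parser")
    # True matches tags that have the attribute, whatever its value
    print([a.get_text() for a in soup.find_all("a", href=True)])
    # None matches tags that lack the attribute entirely
    print([a.get_text() for a in soup.find_all("a", href=None)])
    ```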

Tag objects now know whether or not they're self-closing. This avoids
the problem where Beautiful Soup thought that tags like <BR /> were
self-closing even in XML documents. You can customize the self-closing
tags for a parser object by passing them in as a list of
selfClosingTags: you don't have to subclass anymore.

There's a new built-in parser, MinimalSoup, which has most of
BeautifulSoup's HTML-specific rules, but no tag nesting rules. [Doc
reference]

You can use a SoupStrainer to tell Beautiful Soup to parse only part
of a document. This saves time and memory, often making Beautiful Soup
about as fast as a custom-built SGMLParser subclass. [Doc reference,
SoupStrainer reference]

You can (usually) use keyword arguments instead of passing a
dictionary of attributes to a search method. That is, you can replace
soup(args={"id" : "5"}) with soup(id="5"). You can still use args if
(for instance) you need to find an attribute whose name clashes with
the name of an argument to findAll. [Doc reference: **kwargs attrs]

The method names have changed to the better method names used in
Rubyful Soup. Instead of find methods and fetch methods, there are
only find methods. Instead of a scheme where you can't remember which
method finds one element and which one finds them all, we have find
and findAll. In general, if the method name mentions All or a plural
noun (e.g. findNextSiblings), then it finds many
elements. Otherwise, it finds only one element. [Doc reference]

Some of the argument names have been renamed for clarity. For instance
avoidParserProblems is now parserMassage.

Beautiful Soup no longer implements a feed method. You need to pass a
string or a filehandle into the soup constructor, rather than calling
feed() after the soup has been created. There is still a feed method, but it's the
feed method implemented by SGMLParser and calling it will bypass
Beautiful Soup and cause problems.

The NavigableText class has been renamed to NavigableString. There is
no NavigableUnicodeString anymore, because every string inside a
Beautiful Soup parse tree is a Unicode string.

findText and fetchText are gone. Just pass a text argument into find
or findAll.

Null was more trouble than it was worth, so I got rid of it. Anything
that used to return Null now returns None.

Special XML constructs like comments and CDATA now have their own
NavigableString subclasses, instead of being treated as oddly-formed
data. If you parse a document that contains CDATA and write it back
out, the CDATA will still be there.

When you're parsing a document, you can get Beautiful Soup to convert
XML or HTML entities into the corresponding Unicode characters. [Doc
reference]

= 2.1.1 (20050918) =

Fixed a serious performance bug in BeautifulStoneSoup which was
causing parsing to be incredibly slow.

Corrected several entities that were previously being incorrectly
translated from Microsoft smart-quote-like characters.

Fixed a bug that was breaking text fetch.

Fixed a bug that crashed the parser when text chunks that look like
HTML tag names showed up within a SCRIPT tag.

THEAD, TBODY, and TFOOT tags are now nestable within TABLE
tags. Nested tables should parse more sensibly now.

BASE is now considered a self-closing tag.

= 2.1.0 "Game, or any other dish?" (20050504) =

Added a wide variety of new search methods which, given a starting
point inside the tree, follow a particular navigation member (like
nextSibling) over and over again, looking for Tag and NavigableText
objects that match certain criteria. The new methods are findNext,
fetchNext, findPrevious, fetchPrevious, findNextSibling,
fetchNextSiblings, findPreviousSibling, fetchPreviousSiblings,
findParent, and fetchParents. All of these use the same basic code
used by first and fetch, so you can pass your weird ways of matching
things into these methods.

The fetch method and its derivatives now accept a limit argument.

You can now pass keyword arguments when calling a Tag object as though
it were a method.

Fixed a bug that caused all hand-created tags to share a single set of
attributes.

= 2.0.3 (20050501) =

Fixed Python 2.2 support for iterators.

Fixed a bug that gave the wrong representation to tags within quote
tags like <script>.

Took some code from Mark Pilgrim that treats CDATA declarations as
data instead of ignoring them.

Beautiful Soup's setup.py will now do an install even if the unit
tests fail. It won't build a source distribution if the unit tests
fail, so I can't release a new version unless they pass.

= 2.0.2 (20050416) =

Added the unit tests in a separate module, and packaged it with
distutils.

Fixed a bug that sometimes caused renderContents() to return a Unicode
string even if there was no Unicode in the original string.

Added the done() method, which closes all of the parser's open
tags. It gets called automatically when you pass in some text to the
constructor of a parser class; otherwise you must call it yourself.

Reinstated some backwards compatibility with 1.x versions: referencing
the string member of a NavigableText object returns the NavigableText
object instead of throwing an error.

= 2.0.1 (20050412) =

Fixed a bug that caused bad results when you tried to reference a tag
name shorter than 3 characters as a member of a Tag, eg. tag.table.td.

Made sure all Tags have the 'hidden' attribute so that an attempt to
access tag.hidden doesn't spawn an attempt to find a tag named
'hidden'.

Fixed a bug in the comparison operator.

= 2.0.0 "Who cares for fish?" (20050410) =

Beautiful Soup version 1 was very useful but also pretty stupid. I
originally wrote it without noticing any of the problems inherent in
trying to build a parse tree out of ambiguous HTML tags. This version
solves all of those problems to my satisfaction. It also adds many new
clever things to make up for the removal of the stupid things.

== Parsing ==

The parser logic has been greatly improved, and the BeautifulSoup
class should much more reliably yield a parse tree that looks like
what the page author intended. For a particular class of odd edge
cases that now causes problems, there is a new class,
ICantBelieveItsBeautifulSoup.

By default, Beautiful Soup now performs some cleanup operations on
text before parsing it. This is to avoid common problems with bad
definitions and self-closing tags that crash SGMLParser. You can
provide your own set of cleanup operations, or turn it off
altogether. The cleanup operations include fixing self-closing tags
that don't close, and replacing Microsoft smart quotes and similar
characters with their HTML entity equivalents.

You can now get a pretty-print version of parsed HTML to get a visual
picture of how Beautiful Soup parses it, with the Tag.prettify()
method.

== Strings and Unicode ==

There are separate NavigableText subclasses for ASCII and Unicode
strings. These classes directly subclass the corresponding base data
types. This means you can treat NavigableText objects as strings
instead of having to call methods on them to get the strings.

str() on a Tag always returns a string, and unicode() always returns
Unicode. Previously it was inconsistent.
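
The same design survives in modern bs4, where NavigableString
subclasses str, so a sketch with today's library (markup invented for
illustration) shows the idea:

    ```python
    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<b>hello</b>", "html.parser")
    s = soup.b.string
    print(s.upper())      # ordinary string methods work directly
    print(s.parent.name)  # but the object still knows its place in the tree
    ```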

== Tree traversal ==

In a first() or fetch() call, the tag name or the desired value of an
attribute can now be any of the following:

 * A string (matches that specific tag or that specific attribute value)
 * A list of strings (matches any tag or attribute value in the list)
 * A compiled regular expression object (matches any tag or attribute
   value that matches the regular expression)
 * A callable object that takes the Tag object or attribute value as a
   string. It returns None/false/empty string if the given string
   doesn't match, and any other value if it does.
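
fetch() and first() survive in modern bs4 as find_all() and find(),
so the matcher types above can be sketched with today's names (markup
invented for illustration):

    ```python
    import re

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<b>x</b><i>y</i><u>z</u>", "html.parser")
    # A list of strings
    print([t.name for t in soup.find_all(["b", "i"])])
    # A compiled regular expression object
    print([t.name for t in soup.find_all(re.compile("^[bu]$"))])
    # A callable that takes the Tag object
    print([t.name for t in soup.find_all(lambda tag: tag.name == "u")])
    ```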

This is much easier to use than SQL-style wildcards (see, regular
expressions are good for something). Because of this, I took out
SQL-style wildcards. I'll put them back if someone complains, but
their removal simplifies the code a lot.

You can use fetch() and first() to search for text in the parse tree,
not just tags. There are new alias methods fetchText() and firstText()
designed for this purpose. As with searching for tags, you can pass in
a string, a regular expression object, or a method to match your text.

If you pass in something besides a map to the attrs argument of
fetch() or first(), Beautiful Soup will assume you want to match that
thing against the "class" attribute. When you're scraping
well-structured HTML, this makes your code a lot cleaner.

1.x and 2.x both let you call a Tag object as a shorthand for
fetch(). For instance, foo("bar") is a shorthand for
foo.fetch("bar"). In 2.x, you can also access a specially-named member
of a Tag object as a shorthand for first(). For instance, foo.barTag
is a shorthand for foo.first("bar"). By chaining these shortcuts you
traverse a tree in very little code: for header in
soup.bodyTag.pTag.tableTag('th'):

If an element relationship (like parent or next) doesn't apply to a
tag, it'll now show up as Null instead of None. first() will also return
Null if you ask it for a nonexistent tag. Null is an object that's
just like None, except you can do whatever you want to it and it'll
give you Null instead of throwing an error.

This lets you do tree traversals like soup.htmlTag.headTag.titleTag
without having to worry if the intermediate stages are actually
there. Previously, if there was no 'head' tag in the document, headTag
in that instance would have been None, and accessing its 'titleTag'
member would have thrown an AttributeError. Now, you can get what you
want when it exists, and get Null when it doesn't, without having to
use a lot of conditionals to check whether every stage is None.

There are two new relations between page elements: previousSibling and
nextSibling. They reference the previous and next element at the same
level of the parse tree. For instance, if you have HTML like this:

  <p><ul><li>Foo<br /><li>Bar</ul>

The first 'li' tag has a previousSibling of Null and its nextSibling
is the second 'li' tag. The second 'li' tag has a nextSibling of Null
and its previousSibling is the first 'li' tag. The previousSibling of
the 'ul' tag is the first 'p' tag. The nextSibling of 'Foo' is the
'br' tag.
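
Modern bs4 spells these next_sibling and previous_sibling, and returns
None rather than 2.x's Null where no sibling exists; a sketch with
today's names (and deliberately tidy markup, for illustration):

    ```python
    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<ul><li>Foo</li><li>Bar</li></ul>", "html.parser")
    first = soup.li
    print(first.next_sibling)      # the second <li>
    print(first.previous_sibling)  # nothing before it at this level
    ```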

I took out the ability to use fetch() to find tags that have a
specific list of contents. See, I can't even explain it well. It was
really difficult to use, I never used it, and I don't think anyone
else ever used it. To the extent anyone did, they can probably use
fetchText() instead. If it turns out someone needs it I'll think of
another solution.

== Tree manipulation ==

You can add new attributes to a tag, and delete attributes from a
tag. In 1.x you could only change a tag's existing attributes.

== Porting Considerations ==

There are three changes in 2.0 that break old code:

In the post-1.2 release you could pass a function into fetch(). The
function took a string, the tag name. In 2.0, the function takes the
actual Tag object.

It's no longer possible to pass SQL-style wildcards to fetch(). Use a
regular expression instead.

The different parsing algorithm means the parse tree may not be shaped
like you expect. This will only actually affect you if your code uses
one of the affected parts. I haven't run into this problem yet while
porting my code.

= Between 1.2 and 2.0 =

This is the release to get if you want Python 1.5 compatibility.

The desired value of an attribute can now be any of the following:

 * A string
 * A string with SQL-style wildcards
 * A compiled RE object
 * A callable that returns None/false/empty string if the given value
   doesn't match, and any other value otherwise.

This is much easier to use than SQL-style wildcards (see, regular
expressions are good for something). Because of this, I no longer
recommend you use SQL-style wildcards. They may go away in a future
release to clean up the code.

Made Beautiful Soup handle processing instructions as text instead of
ignoring them.

Applied patch from Richie Hindle (richie at entrian dot com) that
makes tag.string a shorthand for tag.contents[0].string when the tag
has only one string-owning child.

Added still more nestable tags. The nestable tags thing won't work in
a lot of cases and needs to be rethought.

Fixed an edge case where searching for "%foo" would match any string
shorter than "foo".

= 1.2 "Who for such dainties would not stoop?" (20040708) =

Applied patch from Ben Last (ben at benlast dot com) that made
Tag.renderContents() correctly handle Unicode.

Made BeautifulStoneSoup even dumber by making it not implicitly close
a tag when another tag of the same type is encountered; only when an
actual closing tag is encountered. This change courtesy of Fuzzy (mike
at pcblokes dot com). BeautifulSoup still works as before.

= 1.1 "Swimming in a hot tureen" =

Added more 'nestable' tags. Changed popping semantics so that when a
nestable tag is encountered, tags are popped up to the previously
encountered nestable tag (of whatever kind). I will revert this if
enough people complain, but it should make more people's lives easier
than harder. This enhancement was suggested by Anthony Baxter (anthony
at interlink dot com dot au).

= 1.0 "So rich and green" (20040420) =

Initial release.