Data structure for text corpus
A text corpus is usually represented in XML like this:
<corpus name="foobar" date="08.09.13" authors="mememe">
  <document filename="br-392">
    <paragraph pnumber="1">
      <sentence snumber="1">
        <word wnumber="1" partofspeech="VB" sensetag="012345678-v" nameentity="None">Hello</word>
        <word wnumber="2" partofspeech="NN" sensetag="876543210-n" nameentity="World">Foo bar</word>
      </sentence>
    </paragraph>
  </document>
</corpus>
When I put a corpus into a database, I have each row represent a word, with columns like this:
| uid    | corpusname | docfilename | pnumber | snumber | wnumber | token  | pos | sensetag    | ne    |
| 198317 | foobar     | br-392      | 1       | 1       | 1       | Hello  | VB  | 012345678-v | None  |
| 192184 | foobar     | br-392      | 1       | 1       | 2       | foobar | NN  | 87654321-n  | World |
I put the data into an SQLite3 database like this:
import sqlite3

# I read the XML file, and each word token is now in memory as a tuple like this:
w1 = (198317, 'foobar', 'br-392', 1, 1, 1, 'Hello', 'VB', '012345678-v', 'None')
w2 = (192184, 'foobar', 'br-392', 1, 1, 2, 'foobar', 'NN', '87654321-n', 'World')
wordtokens = [w1, w2]

con = sqlite3.connect('semcor.db', isolation_level=None)
cur = con.cursor()
# One row per word token; the columns mirror the table layout above.
engtable = "CREATE TABLE eng(uid INT, corpusname TEXT, docname TEXT, " + \
           "pnum INT, snum INT, tnum INT, " + \
           "word TEXT, pos TEXT, sensetag TEXT, ne TEXT)"
cur.execute(engtable)
cur.executemany("INSERT INTO eng VALUES(?,?,?,?,?,?,?,?,?,?)", wordtokens)
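In reality wordtokens comes from walking the XML, roughly like the following sketch with xml.etree.ElementTree (the path 'corpus.xml' and the running uid counter are just placeholders):

import xml.etree.ElementTree as ET

def corpus_to_rows(path):
    # Flatten the corpus XML into one tuple per word token,
    # in the same column order as the eng table above.
    corpus = ET.parse(path).getroot()
    uid = 0
    for document in corpus.iter('document'):
        for paragraph in document.iter('paragraph'):
            for sentence in paragraph.iter('sentence'):
                for word in sentence.iter('word'):
                    uid += 1  # placeholder id; any unique integer would do
                    yield (uid,
                           corpus.get('name'),
                           document.get('filename'),
                           int(paragraph.get('pnumber')),
                           int(sentence.get('snumber')),
                           int(word.get('wnumber')),
                           word.text,
                           word.get('partofspeech'),
                           word.get('sensetag'),
                           word.get('nameentity'))

wordtokens = list(corpus_to_rows('corpus.xml'))  # placeholder filename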
The purpose of the database is so that I can run queries like these:
SELECT * FROM eng WHERE pnum=1;
SELECT * FROM eng WHERE snum=1;
SELECT * FROM eng WHERE snum=1 AND pos='NN' OR sensetag='87654321-n';
SELECT * FROM eng WHERE pos='NN' AND sensetag='87654321-n';
SELECT * FROM eng WHERE docname='br-392';
SELECT * FROM eng WHERE corpusname='foobar';
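With millions of rows I assume these queries would also need indexes on the filter columns to avoid full table scans; something like this sketch is what I would add (the index names are made up):

cur.execute("CREATE INDEX idx_eng_pnum ON eng(pnum)")
cur.execute("CREATE INDEX idx_eng_snum ON eng(snum)")
cur.execute("CREATE INDEX idx_eng_pos_sensetag ON eng(pos, sensetag)")
cur.execute("CREATE INDEX idx_eng_docname ON eng(docname)")
cur.execute("CREATE INDEX idx_eng_corpusname ON eng(corpusname)")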
It seems that when I structure the database this way, its size explodes, because the number of tokens in a corpus can run into the millions or even billions.
Other than structuring the corpus with one row per word, where the columns hold the word's own attributes and those of its parent elements, how else could I structure the database so that I can perform the same queries and get the same output?
For the purpose of indexing a large corpus: 1. Should I be using a database other than sqlite3? 2. Should I still use the same table schema as defined above?
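For example, is a more normalized layout, with separate tables for documents, sentences and words joined by ids, the kind of thing I should be aiming for? A rough sketch of what I mean (table and column names are made up):

cur.executescript("""
CREATE TABLE doc  (docid  INTEGER PRIMARY KEY, corpusname TEXT, docname TEXT);
CREATE TABLE sent (sentid INTEGER PRIMARY KEY, docid INT, pnum INT, snum INT);
CREATE TABLE word (wordid INTEGER PRIMARY KEY, sentid INT, tnum INT,
                   word TEXT, pos TEXT, sensetag TEXT, ne TEXT);
""")

# e.g. the docfilename query above would then become a join:
# SELECT w.* FROM word w
#   JOIN sent s ON s.sentid = w.sentid
#   JOIN doc d  ON d.docid  = s.docid
#  WHERE d.docname = 'br-392';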