divided? it seems.. pointless. i didn't get there today, but i updated the notes with information on the CFS's "header".
new interesting things on CFS
- cluster on NJB is 0x10 sectors, on ZVM - 0x40 sectors
- CFS's addressing actually overlaps with MINIFS: CFS's physical sector 0x000000 is seen by CFS as the *second* sector of its very first cluster, and the last sector of MINIFS is seen as that cluster's first sector. this is why the first cluster is never ever used, has a logical ID of -1/0xFFFFFFFF and is filled with 0x00s or 0xFFs
- cluster 0x02 contains volume-specific information, and cluster 0x03 probably does too. for now, i'll call them volumeinformation1 (VI1) and volumeinformation2 (VI2) respectively
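if my reading of the off-by-one overlap above is right, a cluster-to-physical-sector mapping would look something like this. a sketch only - the -1 shift is my interpretation of the overlap note, not something verified against the firmware:

```python
# sketch: CFS cluster -> physical CFS sector, assuming the one-sector
# overlap with MINIFS described above (cluster 0's second sector is
# CFS's physical sector 0, so everything shifts back by one sector)

CLUSTER_SIZE_NJB = 0x10  # sectors per cluster on NJB
CLUSTER_SIZE_ZVM = 0x40  # sectors per cluster on ZVM

def cluster_to_physical_sector(cluster, cluster_size):
    """First physical CFS sector of a cluster, with the -1 shift."""
    return cluster * cluster_size - 1
```

note that cluster 0 would then start one sector *before* CFS's own sector 0, i.e. inside MINIFS - consistent with that cluster never being used.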
the VI1 contains very important information. currently known are:
- cluster size
- volumesize
- signature 'BFS1'
- root directory inode number! no more hardcoding!
- timestamp. what did you do on 2005-08-01 01:51:07 ?
that was probably the time when the drive was formatted and the volume created
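since the field offsets inside VI1 aren't mapped yet, the only safe check for now is locating the signature. a minimal sketch - it just finds 'BFS1' in a VI1 cluster dump and returns its offset for further poking:

```python
def parse_vi1(cluster_bytes):
    """Sketch: locate the 'BFS1' signature inside a VI1 cluster dump.
    Field offsets (cluster size, volume size, root inode, timestamp)
    are NOT known yet, so this only returns where the signature sits."""
    off = cluster_bytes.find(b'BFS1')
    if off < 0:
        raise ValueError("no BFS1 signature - not a VI1 cluster?")
    return off
```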
----------
and the bitmaps [notes/minifs/bitmaps.txt]
all, or almost all, of the lists that must be traversed have a special 'bitmap' that tells which entries are used and which are free. for example:
- drive:cluster bitmap <-> all clusterchains
- a directory:direntry bitmap <-> that directory's list of entries
and maybe others, i don't recall right now.
each entry on the list has a zero-based position, and so do the bits in the bitmap. each bit tells whether the entry is used: bit=0 indicates that it is free, bit=1 that it is used. for example:
given a dir entries' list:
0 empty
1 empty
2 somefilename
3 empty
4 somefilename
5 somefilename
6 empty
7 empty
8 SoMeFiLeNaMe
9 empty
A somefilename
B somefilename
C somefilename
D somefilename
E empty
F somefilename
the bitmap would contain: 0 0 1 0 1 1 0 0 1 0 1 1 1 1 0 1, that is b1011110100110100, that is 0xBD34
BUT, if the bitmap actually contained 0xBC34, that would mean that the SoMeFiLeNaMe entry is free - despite having well-looking contents! - for example, the file could have been deleted a long time ago and its unlinked data could already have been overwritten with new content
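the used/free test for the example above can be sketched in a few lines (bit numbering is zero-based, exactly as in the list):

```python
def entry_is_used(bitmap, index):
    """bit=1 means the entry at `index` is used, bit=0 means free;
    entries and bits are both zero-based, as in the example above."""
    return (bitmap >> index) & 1 == 1

# 0xBD34 marks entries 2,4,5,8,A,B,C,D,F as used;
# 0xBC34 differs only in bit 8, declaring SoMeFiLeNaMe free
```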
the same goes for the cluster bitmap. the cluster bitmap indexes the whole volume's clusterspace: each cluster has a corresponding bit in the bitmap and is subject to being checked against it. if a clusterchain contains a cluster ID which is marked as free (bit=0), that clearly indicates that the chain and/or the bitmap has been corrupted. this is serious, because when new files are written/added to the volume, the system searches for free space using the *bitmap*. actually, this is the main reason for its existence: looking for holes that may be reused. traversing a list of 0/1 bits is waaay faster than scanning clusterchains and looking for which cluster IDs are unused
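the free-space search the system does can be sketched like this. note the LSB-first bit order within each bitmap byte is an assumption of mine, not something i've confirmed:

```python
def find_free_clusters(bitmap_bytes, count):
    """Sketch of the bitmap's main job: scan the bits (LSB-first per
    byte - an assumption) and collect `count` free cluster IDs."""
    free = []
    for cluster_id in range(len(bitmap_bytes) * 8):
        byte, bit = divmod(cluster_id, 8)
        if (bitmap_bytes[byte] >> bit) & 1 == 0:  # bit=0 -> free
            free.append(cluster_id)
            if len(free) == count:
                return free
    return free  # fewer free clusters than requested
```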
---------
i have started looking into the CFS directory structure. the whole volume format matches the old one, apart from the changed cluster size. the inodes' and directories' structure is the same, too. the directories hold 1632 entries in a total of 0x80 sectors, which translates to 8 clusters on NJB and 2 clusters on ZVM.
on your (mcuelenaere) drive image, in the root directory, I have found several new entries:
inode=0x00000009, fnlen = 0x0008, unk = 0x0002, {archives\0,} [old]
inode=0x0000000E, fnlen = 0x0003, unk = 0x0002, {pim\0,} [new!]
inode=0x00000013, fnlen = 0x0009, unk = 0x0002, {playlists\0,} [old]
inode=0x00000018, fnlen = 0x000A, unk = 0x0002, {recordings\0,} [old]
inode=0x0000001D, fnlen = 0x0005, unk = 0x0002, {songs\0,} [old]
inode=0x00000022, fnlen = 0x0006, unk = 0x0002, {system\0,} [old]
inode=0x00000027, fnlen = 0x0006, unk = 0x0002, {photos\0,} [new]
inode=0x0000002C, fnlen = 0x0006, unk = 0x0002, {videos\0,} [new]
inode=0x00000031, fnlen = 0x0004, unk = 0x0002, {vdir\0,} [new]
inode=0x00000036, fnlen = 0x0005, unk = 0x0002, {vrefs\0,} [new]
inode=0x0000003B, fnlen = 0x0004, unk = 0x0002, {VFAT\0,} [new]
inode=0x00000044, fnlen = 0x0006, unk = 0x0002, {albums\0,} [new]
inode=0x00000000,
please note the vdir, vrefs, VFAT:
vdir is a directory with 9 valid entries, but the entries' filenames are "damaged" (instead of a name, 0xE59FF018 is repeated)
vrefs - unknown. its inode looks like an inode of a directory. its contents are almost totally zeroed. if it is a directory, it is empty
vfat - a directory with ONE entry, named VFSYS [bingo]. so.. you might have something wrong in your reader - as you said, your reader reports several files here. maybe you mistook the directory for vdir somewhere?
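for cross-checking readers, here's how i'd decode one direntry from the dumps above. the byte layout (u32 inode, u16 fnlen, u16 unk, then the name plus a terminating NUL) is my guess from the listings; little-endian is also an assumption, though the byte-swapped 'TAFV' tag in the VFSYS dump hints at it:

```python
import struct

def parse_direntry(buf, off=0):
    """Sketch of one CFS directory entry, assuming the layout seen in
    the root-directory listing above: u32 inode, u16 fnlen, u16 unk,
    then fnlen bytes of name followed by a NUL. Little-endian assumed."""
    inode, fnlen, unk = struct.unpack_from('<IHH', buf, off)
    name = buf[off + 8 : off + 8 + fnlen].decode('ascii')
    return inode, fnlen, unk, name
```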
--------
to speed up your searches - on the CF122,25mb image you have sent me:
- the contents of VFAT directory are on drive's physical sector
- the direntry for VFSYS file is: {inode=0x00000040, fnlen = 0x0005, unk = 0x0001, {VFSYS\0,} }
- the VFSYS file's inode is on sector
- the VFSYS file's clusters are ... , 100% consecutively
on the image, the VFSYS file is 0x30 bytes, as follows:
Offset 0 1 2 3 4 5 6 7 8 9 A B C D E F
00000000 03 00 B0 00 54 41 46 56 00 00 00 00 00 00 00 00 ..°.TAFV........
00000010 00 00 00 00 00 00 00 00 00 00 00 00 FF FF FF FF ............˙˙˙˙
00000020 00 00 00 00 00 00 00 00 00 00 00 00 FF FF FF FF ............˙˙˙˙
...small, eh? had construction of the virtual volume on the CF card failed?
-----
to get to the VFSYS file's inode:
- determine the raw position of the CFS
- read the root directory inode (1 cluster, 64 sectors)
- read the two clusters firstclasschain[0] and firstclasschain[1] (2*64 sectors) that contain the root directory
- find the VFAT entry
- store the inode number from that entry
- read that inode
- read the two clusters firstclasschain[0] and firstclasschain[1] (2*64 sectors) that contain the vfat directory
- find the VFSYS entry
- store the inode number from that entry
- read that inode
hurray, you have the file(s)
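the walk above can be sketched with the I/O stubbed out. read_inode, read_cluster and parse_direntries are placeholders i made up - we don't have exact formats for those routines yet - and the two-cluster read of firstclasschain comes straight from the steps above:

```python
def find_file(read_inode, read_cluster, parse_direntries,
              root_inode_no, path):
    """Sketch of the lookup walk: descend `path` (e.g. ['VFAT', 'VFSYS'])
    from the root directory and return the target's inode. Each
    directory's data sits in the first two clusters of its inode's
    firstclasschain, per the step list above."""
    inode = read_inode(root_inode_no)
    for name in path:
        data = b''.join(read_cluster(c)
                        for c in inode['firstclasschain'][:2])
        entries = {e_name: e_inode
                   for e_inode, e_name in parse_direntries(data)}
        inode = read_inode(entries[name])  # descend into the entry
    return inode
```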