Under the hood, Aperture stores information about an image in two places. The first is a SQLite3 database, and the second is an assortment of plist files. One would venture that this approach exists for data-redundancy purposes and that the data is copied between the two stores frequently. If you choose to rebuild your Aperture Library by holding down the Option key at startup, Aperture will recreate the SQLite3 database using the information stored in the plist files.
When discussing Aperture’s speed, people have rightly focused on the computer’s processing power, the video card, or the speed of the hard drive. We should also take a look at Apple’s decision to use SQLite. From the SQLite website:
We are aware of no other embedded SQL database engine that supports as much concurrency as SQLite. SQLite allows multiple processes to have the database file open at once, and for multiple processes to read the database at once. When any process wants to write, it must lock the entire database file for the duration of its update. But that normally only takes a few milliseconds. Other processes just wait on the writer to finish then continue about their business. Other embedded SQL database engines typically only allow a single process to connect to the database at once.
I’m no SQLite expert by any means, but I wonder if this is the cause of Aperture’s close friendship with the SBBOD (Spinning Beach Ball of Death). I was at Eric’s house yesterday, and we were trying to use Aperture to export an image for a magazine. Eric was importing images into his system while making simple adjustments and annotations to the image, and we must have experienced the SBBOD half a dozen times, for over 30 seconds each time! It got so annoying that Eric fired up Capture One on his PC, edited the file, developed it, sharpened it in Photoshop, and annotated it in BreezeBrowser, all in the time of one SBBOD session.
I’m working on Timeature right now and have successfully duplicated the “database locked” problem that some of my users are experiencing. The way I have coded Timeature, it keeps trying to write to the database, after a short delay, for up to ten attempts. Whenever this happens, I quickly jump back to Aperture to see if I can perform any editing operation. Invariably, I get the SBBOD. When it disappears, however, Timeature is able to complete its changes. I have no idea what Aperture is writing to the database, but it must be a lot of information, as evidenced by the amount of SBBOD we are all seeing.
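For the curious, the retry logic amounts to something like the following. This is a minimal C sketch against the SQLite API, not Timeature’s actual code, and the 250ms delay is just illustrative:

#include <sqlite3.h>
#include <unistd.h>

/* Try a write up to ten times, sleeping briefly whenever Aperture
   is holding the database lock. */
int write_with_retries(sqlite3 *db, const char *sql)
{
    int rc = SQLITE_BUSY;
    for (int attempt = 0; attempt < 10 && rc == SQLITE_BUSY; attempt++) {
        rc = sqlite3_exec(db, sql, NULL, NULL, NULL);
        if (rc == SQLITE_BUSY)
            usleep(250 * 1000);  /* short delay, then try again */
    }
    return rc;  /* SQLITE_OK on success; still SQLITE_BUSY if all ten failed */
}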
This leads me to believe that one of the biggest bottlenecks in Aperture is its use of SQLite3 as a backend database. I’m not sure what Apple will do to remedy this issue. Perhaps the use of plist files is a preliminary step in the right direction: by tuning when the application writes to and reads from the plist files instead of the database, perhaps we could see some immediate speed improvements. Long term, perhaps the solution is to find a database engine that supports concurrent writes?
I’ve uploaded the results from running sample() on Aperture while the database is locked. Perhaps this can shed some more light on the slowdowns. Some interesting portions from the sample reading include how long the process was waiting for each action to complete (aggregate time):
Sort by top of stack, same collapsed (when >= 5):
semaphore_wait_signal_trap 51445
mach_wait_until 3041
wait4 3001
mach_msg_trap 1999
read 796
semaphore_timedwait_signal_trap 617
sqlite3AuthContextPop 386
lseek 315
sqlite3VdbeExec 128
sqlite3BtreeMoveto 127
sqlite3BtreePrevious 91
sqlite3VdbeSerialGet 81
sqlite3BtreeKeySize 70
sqlite3GetVarint 60
sqlite3pager_unref 55
sqlite3VdbeRecordCompare 53
sqlite3OsCurrentTime 47
__memcpy 44
sqlite3GetVarint32 41
sqlite3pager_pagecount 39
sqlite3MemCompare 38
sqlite3pager_get 38
sqlite3BtreeData 37
gldGetString 34
sqlite3VdbeSerialTypeLen 30
sqlite3VdbeMemRelease 28
szone_size 20
sqlite3BtreeCloseCursor 19
__spin_lock 15
sqlite3pager_pagenumber 14
szone_free 14
dyld_stub_memcpy 13
sqlite3VdbeCursorMoveto 12
szone_malloc 12
sqlite3RunVacuum 10
sqlite3VdbeMemShallowCopy 10
objc_msgSend_rtp 9
sqlite3BtreeDataFetch 9
malloc 8
sqlite3VdbeMemFromBtree 7
sqlite3pager_lookup 7
free 6
memmove 6
fcntl 5
objc_msgSend 5
sqlite3BtreeDataSize 5
sqlite3OsRead 5
sqlite3VdbeMemMakeWriteable 5
Also, how many times a particular call was made:
Total number in stack (recursive counted multiple, when >=5):
82 gldGetString
73 sqlite3AuthContextPop
54 CFArrayApplyFunction
47 semaphore_wait_signal_trap
45 NSProApplicationLoad
44 pthread_cond_wait
34 -[NSView _recursiveDisplayRectIfNeededIgnoringOpacity:isVisibleRect:rectIsVisibleRectForView:topView:]
31 mach_msg
31 mach_msg_trap
28 -[NSRecursiveLock lock]
28 -[NSView _recursiveDisplayAllDirtyWithLockFocus:visRect:]
28 -[NSView visibleRect]
28 _decodeObjectBinary
27 -[NSManagedObjectContext(_NSInternalAdditions) lockObjectStore]
25 sqlite3pager_unref
21 _recursiveDisplayInRect2
20 _CFRelease
20 _pthread_body
19 _decodeObject
17 forwardMethod
17 sqlite3BtreeKeySize
16 -[NSView _drawRect:clip:]
16 forkThreadForFunction
14 __memcpy
13 -[NSConditionLock lockWhenCondition:]
13 __CFBinaryPlistCreateObject
13 sqlite3pager_get
12 -[NSView _windowChangedKeyState]
12 __spin_lock
11 -[NSView _propagateDirtyRectsToOpaqueAncestors]
11 -[NSView hitTest:]
11 ripc_Render
11 sqlite3BtreeData
Lots of waiting and obviously lots of SQLite calls. Any thoughts?
Update 11/9/2006: Found some more interesting information on Apple’s Developer Website about Core Data and SQLite. My emphasis in bold:
Saving to a SQLite Store
When writing data, SQLite ensures that the bytes are actually written through to the drive platter. Since fsync does not make this guarantee on Mac OS X, SQLite sends a F_FULLFSYNC request to the kernel. This causes the kernel to flush all buffers to the drives and causes the drives to flush their track caches. Without this, there is a significantly large window of time within which data will reside in volatile memory—and in the event of system failure you risk data corruption. The same calls are not made for XML or Binary stores—since they are atomic, there is a much lower likelihood of data loss that involves corruption of the file, especially since the writes are typically atomic and the old file is not deleted until the new has been successfully written.
This behavior means that, in some situations, saving even a small set of changes to an SQLite store can take considerably longer than saving to, for example, an XML store (for example, where saving to an XML file might take less than a hundredth of a second, saving to an SQLite store may take almost half a second). Since SQLite is updating part of a file, and loss of that partial update would be catastrophic, Core Data (through SQLite) goes to greater lengths to ensure that the bytes are really written.
You can set the defaults key com.apple.CoreData.SQLiteDebugSynchronous to one of three values: 0 (OFF), 1 (NORMAL), or 2 (FULL).
FULL: all disk syncing is performed via the fcntl FULL_FSYNC command—a costly but rock-solid “write my bits to disk” operation.
We got burned in SQLite comparisons against Linux and such – since sqlite on other platforms used the fsync() operation to sync bits to disk (a cheaper but less robust disk syncing operation).
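If I’m reading that right, the difference between the two sync calls boils down to something like the following. This is my own sketch, not Apple’s or SQLite’s actual code:

#include <fcntl.h>
#include <unistd.h>

/* fsync() pushes dirty buffers to the drive, but the drive may still be
   holding them in its track cache; F_FULLFSYNC forces them to the platter. */
int sync_to_platter(int fd)
{
    if (fcntl(fd, F_FULLFSYNC) == 0)  /* the costly, rock-solid path */
        return 0;
    return fsync(fd);                 /* cheaper, weaker fallback */
}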
One could launch Aperture with com.apple.CoreData.SQLiteDebugSynchronous set to 0 to see if this improves performance. I’m hesitant to try this on my primary library, because one little mistake might cause the whole SQLite database to become corrupted. My SQLite database file is already 630MB in size. I can easily see how locking and editing a small part of that file can be fast for an individual change, but really slow when you take into account dozens or hundreds of changes happening at the same or similar time.
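Assuming the key is honored as a launch argument the way other Core Data defaults are (I haven’t verified this), it would look something like:

$ /Applications/Aperture.app/Contents/MacOS/Aperture -com.apple.CoreData.SQLiteDebugSynchronous 0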
Your 10-retry loop sounds like Aperture is sitting in a transaction, and Timeature is simply unable to get a write lock until all other locks have been cleared. Can you get at the sqlite API itself and register a new busy handler? If so, you’ll be able to see a lot more of what’s happening.
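Registering one is only a couple of lines against the C API. A sketch (the library filename is a guess on my part, and the hard part is getting at the same file Aperture uses):

#include <sqlite3.h>
#include <stdio.h>
#include <unistd.h>

/* Called each time sqlite finds the database locked; n counts the retries.
   Return nonzero to have sqlite try again, 0 to give up with SQLITE_BUSY. */
static int chatty_busy_handler(void *ctx, int n)
{
    fprintf(stderr, "database locked; retry #%d\n", n);
    usleep(100 * 1000);  /* back off briefly before the next attempt */
    return n < 10;       /* Timeature-style: surrender after ten tries */
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("Library.apdb", &db) != SQLITE_OK)  /* hypothetical path */
        return 1;
    sqlite3_busy_handler(db, chatty_busy_handler, NULL);
    /* ... reads/writes here now log every collision with Aperture's lock */
    sqlite3_close(db);
    return 0;
}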
If Aperture is spinning its beach ball for 30 seconds, and it’s the only thing hitting the database, something other than sqlite lock/concurrency is happening. Sqlite is fast when only one application is doing the reads/writes to the database file. Perhaps the problem is more mundane than locks, and the schema needs some more indexes.
BEGIN TRANSACTION;
SELECT ...;
UPDATE ...;
-- sleep(24 hours)
COMMIT;
No one can write to the database until the above transaction finishes, tomorrow. Sqlite doesn’t time out an open transaction and close it because it decides you’ve had it open too long. Maybe Aperture assumes it’s the only one who would ever want to use this database, keeps transactions like the above open too long, and is just not a nice database citizen in general.
From a data-contention point of view… the problem with sqlite (as you quoted) is that it locks everyone out of the whole database file when someone gets a write lock to the database, instead of just locking the range of bytes to be modified.
Sqlite is really fast, small, pretty safe, and handles tons of concurrency. But there’s this dark side where applications can actually deadlock each other trying to write. It’s not common, but it’s not hard to make happen either. The upshot is that the bowels of sqlite detect the deadlock and punt to an application-definable busy handler to resolve it. Yes, the application must fix a deadlock caused inside of sqlite. Nice, huh? The stock busy handler just sleeps a few ms, wakes up, tries to get a lock, oops, can’t, goes back to sleep. Neither app progresses, as both are waiting for the other to drop its lock. Think dining philosophers problem. Resolution comes when one application gives up (assuming one does) and relinquishes its lock, which allows the other application to acquire its write lock and continue.
(For those curious/insane: this happens if APP1 is in a transaction, has acquired a SHARED lock, and is going to elevate to EXCLUSIVE. Meanwhile, APP2 acquires locks SHARED -> RESERVED -> EXCLUSIVE (fails, as APP1 has a SHARED lock) -> PENDING. Deadlock. APP1 can’t go SHARED -> RESERVED -> EXCLUSIVE because APP2 has a lock, and APP2 can’t go PENDING -> EXCLUSIVE because APP1 has a lock. Neither can progress, and neither can step down in lock levels, as both are in transactions. APP1’s and APP2’s busy handlers get called… until someone gives up trying, maybe 30 seconds later, maybe an hour. I can name this tune in three SQL statements.)
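If you want to provoke the collision yourself, here’s a toy repro: run two copies of this against the same scratch file. The table is invented, and depending on the sqlite build the loser may get SQLITE_BUSY immediately (deadlock detection) rather than spin:

#include <sqlite3.h>
#include <stdio.h>
#include <unistd.h>

static int spin(void *ctx, int n) { usleep(50 * 1000); return n < 100; }

int main(void)
{
    sqlite3 *db;
    sqlite3_open("scratch.db", &db);
    sqlite3_busy_handler(db, spin, NULL);
    sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t (x);", NULL, NULL, NULL);

    sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);  /* deferred: no lock yet */
    sqlite3_exec(db, "SELECT count(*) FROM t;", NULL, NULL, NULL);  /* SHARED, held to COMMIT */
    sleep(2);  /* window for the second copy to grab SHARED too */

    /* both copies now try to escalate toward a write lock; only one can win */
    int rc = sqlite3_exec(db, "UPDATE t SET x = x;", NULL, NULL, NULL);
    if (rc == SQLITE_OK)
        rc = sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);
    if (rc != SQLITE_OK) {
        fprintf(stderr, "lost the lock fight (rc = %d); rolling back\n", rc);
        sqlite3_exec(db, "ROLLBACK;", NULL, NULL, NULL);
    }
    sqlite3_close(db);
    return 0;
}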
I could go on and on about sqlite contention, as I’ve spent many hours going through gigabytes of log files trying to hunt down deadlocks and 30-second gaps where all processes just did nothing while everyone should have been writing. The sqlite (www.sqlite.org) documentation covers everything, but it didn’t really sink in until I saw deadlocks happening and put statements into sqlite to print out all the lock progressions, then walked through everything in all the apps. It’s more fun than it sounds. I eventually changed our build of sqlite to reduce concurrency but eliminate the chance of this deadlock… but that’s another story.
sqlite3OsSync
sqlite3RunVacuum
Vacuum? Oh man. It’s basically defragging as it goes, as opposed to letting the user do it. Doesn’t the Mac do a defrag after an app installs anyway? Why is it vacuuming now?
The OsSync is also slow. It’s doing a full filesystem flush on each write. All of this can be tweaked with sqlite pragmas to increase speed. It looks like it’s in slow mode.
http://sqlite.org/pragma.html
PRAGMA fullfsync
PRAGMA auto_vacuum
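Flipping those would look something like this (my suggested values, not anything Aperture ships with; fullfsync = 0 trades crash-safety for speed, and auto_vacuum generally only takes effect on a freshly created database):

PRAGMA fullfsync = 0;    -- skip the F_FULLFSYNC flush on each write
PRAGMA auto_vacuum = 0;  -- stop reorganizing pages as you go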
I didn’t see any contention problems in the stack trace. Grep for “busy” in your stacks to see those. Basically, a “busy” function followed by a sleep indicates a contention issue.
I don’t know enough Objective-C, but this is sort of interesting. Look at all this code between the sqlite3 lock (locks are always interesting to look for) and the next sqlite statement. What is all that? The implementation of the lock? What is NSSQLCore, and why is it between a sqlite lock and the next sqlite statement? Heh.
1 sqlite3OsLock
1 fcntl
1 fcntl
1 NSPopAutoreleasePool
1 NSPopAutoreleasePool
1 0x389a30
1 0x3899b4
1 0x3e4edc
1 0x3e5128
1 -[NSManagedObjectContext executeFetchRequest:error:]
1 -[NSPersistentStoreCoordinator(_NSInternalMethods) executeRequest:withContext:]
1 -[NSSQLCore executeRequest:withContext:]
1 -[NSSQLCore objectsForFetchRequest:inContext:]
1 -[NSSQLCore _newRowsForSelector:withArgument:andLimit:]
1 -[NSSQLChannel selectRowsWithFetchRequest:]
1 -[NSSQLiteConnection execute]
1 -[NSSQLiteConnection _execute]
1 sqlite3_step
The SQLCore stuff is probably the underlying code that handles the Core Data calls. One thing to keep in mind here is that Aperture is going through what is essentially an object-relational mapping system to reach the database, and those can use the database in non-optimal ways as they attempt to fetch and flush items somewhat independently of the database itself.
I think it’s a mixture of that and also the indexes they have set up (if any; Aperture did not seem to have any a few versions ago). Indexes can make inserts slower, and inserting is something Aperture does a lot of.
Thanks for looking into this. I had thought to do some investigation at some point, but you’re doing a great job so far.
Did you know about the property you can set when launching Aperture that displays all the SQL executed against the database? I forget what it is offhand, but I could find out if you were not already aware of it.
I hadn’t heard about the property for displaying all SQL executed in Aperture. If you could find it, that would be great!
Yeah, it’d be nice to see all the SQL. If Aperture doesn’t provide this, you could try to sneak in a call to sqlite3_trace and provide your own callback (http://sqlite.org/capi3ref.html#sqlite3_trace). If you can’t sneak that in, you could try adding a trigger (or several) to their tables (http://sqlite.org/lang_createtrigger.html) to fire per write; then, in the trigger action, you update your own log table with their action. This will really slow things down, of course, but it’s just a trick to develop a log of what they’re doing. I haven’t tried it, but it seems like fun; there’s a rough sketch below.
Also, if the schema really doesn’t have indexes, you could open their database in your app and add some indices.
So many things to try..
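For instance (a sketch against the C API; the library filename, log table, trigger, and index are all made up, and sqlite3_trace only sees statements on your own connection unless you can sneak it into Aperture’s process):

#include <sqlite3.h>
#include <stdio.h>

/* print every statement executed on this connection */
static void trace_sql(void *ctx, const char *sql)
{
    fprintf(stderr, "SQL> %s\n", sql);
}

int main(void)
{
    sqlite3 *db;
    if (sqlite3_open("Library.apdb", &db) != SQLITE_OK)  /* hypothetical path */
        return 1;

    /* option 1: the trace callback -- no schema changes at all */
    sqlite3_trace(db, trace_sql, NULL);

    /* option 2: a logging trigger (table and column names invented) */
    sqlite3_exec(db,
        "CREATE TABLE my_log (ts TEXT, op TEXT);"
        "CREATE TRIGGER log_writes AFTER UPDATE ON images "
        "BEGIN INSERT INTO my_log VALUES (datetime('now'), 'update'); END;",
        NULL, NULL, NULL);

    /* option 3: an index, if the schema really lacks one */
    sqlite3_exec(db, "CREATE INDEX idx_keyword ON images (keyword);",
                 NULL, NULL, NULL);

    sqlite3_close(db);
    return 0;
}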
Sorry to contact you here, but I couldn’t find any email address to reach you.
Hi,
I’m trying Timeature, and I’ve got a strange bug that is not in the FAQ: after choosing “Updates dates”, I choose “Ok” and then I get “AppleScript error: SQL error: near “847839960387E+8”: syntax error (1)”.
Do you have any idea what is happening?
(Do you have an email address I could use to reach you?)
Thanks,
Vincent, France.
Vincent:
A fix for this bug will be in the next release of Timeature, scheduled for release real soon now!
>I hadn’t heard about the property for displaying all SQL executed in Aperture. If you could find it, that would be great!
Is it this one?
“With Mac OS X version 10.4.3 and later, you can use the user default com.apple.CoreData.SQLDebug to log to stderr the actual SQL sent to SQLite. (Note that user default names are case sensitive.)”
From http://developer.apple.com/documentation/Cocoa/Conceptual/CoreData/Articles/cdTroubleshooting.html#//apple_ref/doc/uid/TP40002320-DontLinkElementID_75
Alternatively, starting Aperture from the command line might also work:
$ /Applications/Aperture.app/Contents/MacOS/Aperture -com.apple.CoreData.SQLDebug 1
(This was posted a year ago on Mungosmash: http://www.mungosmash.com/archives/2005/12/the_aperture_da.php#more. He (Allen?) was also hunting for clues, but he seems to have found better things to do :-)
Cheers,
Peter.
Just some other things that could be checked in the database to increase perf:
* Increase SQLITE_DEFAULT_PAGE_SIZE in pager.h to increase select performance (it won’t affect existing databases, though, unless you do a lot more work; see the pragma note after this list)
* Doing an ORDER BY on a column that is not indexed causes a new table to be created on disk, sorted on disk, then read back as the result set.
* Moving BLOBs around results in a malloc(sizeof(blob)); BLOBs are not chunked in or out.
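(Newer sqlite builds can also set the page size per-database with a pragma, issued before the file is populated; I’m assuming the version bundled with Aperture supports it:)

PRAGMA page_size = 8192;  -- must be set before the first table is created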
Peter:
I ran Aperture from the command line, and it does display SQL output to stdout. That said, many of the parameters in the SQL display as question marks. For instance:
CoreData: sql: SELECT 0, R.Z_PK, R.Z_OPT, R.ZNAME, R.ZPARENT FROM ZRKKEYWORD R WHERE ( R.ZNAME = ? AND R.ZPARENT IS NULL )
Do you know of any way to replace the ? characters with the proper parameter values?
Answered my own question. I can bump up the value of com.apple.CoreData.SQLDebug to 2 to display more information. This makes performance much slower, however (it grinds to a crawl).
Posted some information in the article about saving to a SQLite store in Core Data.
The com.apple.CoreData.SQLDebug command-line property was exactly what I was thinking of.
You went beyond where I had explored, though, as I had not noticed that you could increase the value beyond “1” to get parameter data!
Possibly you could attach a debugger and somehow set that property on the fly during a particular time of interest? I had tried attaching some of Apple’s profiling tools in the past, and they will attach, although I could not gain much of use from the attempt.
Interesting information about the length of time you may see saving to an SQLite store…
As per Adam’s quoted text: 500ms for a single write? That’s beyond slow. I wonder what “greater lengths” they’re going to in order to ensure the sqlite data is written. I’m tending to believe that every safe-vs-fast option has been skewed towards ‘safe’ in Core Data’s calls to sqlite, just because the switches are provided. The XML update reference is doing what, updating a text file on disk? SQLite is doing the same, but with a binary file. I can’t see how the window for corruption is different for the XML case vs. the sqlite case. Why turn on every slow switch in the sqlite case? Because sqlite provides the switches to be flipped? Does the XML write have options to ‘defrag the file and flush the cache to disk after a write’? I’m guessing not, which is why they’re not set, which is why their XML case is so much faster…
Ah well, so much guessing.
Did you guys come to any conclusive answer regarding this? Did this occur before or after the last Aperture update?
Thanks
Marek
Marek:
Sorry, but I don’t have anything else to update. There’s an Aperture update coming in a few weeks, but I don’t know if there will be any speed improvements.
So, is this project dead? I get the same SQL error, and it really is a shame. I’ve got 2000 photos to correct, and no command-line tool is going to make that half as easy as Timeature would. :-(
All that talk about concurrency doesn’t make sense to me. Aperture is very slow for me when I am doing just one thing: searching for all the photos that match a particular keyword. I assume, and hope, that that is not running in parallel with any writing. Or, changing a photo’s rating while viewing a subset of my entire library: that operation takes 5 to 10 seconds to remove a photo from my current view! Come on! This is 2007.
Things like that should be instantaneous and instead they are impossibly slow.
I assume it’s SQLite, and really I think it has to go. Having your 5GHz of processing power (2.5 x 2) sitting idle while the disk heads are seeking, just because the computer might crash or lose power, is unacceptable. It is a design flaw. There are ways to make operations safe without resorting to such drastic measures. Aperture is unusable unless you are very careful about how you use it, which is ridiculous for a professional line of software.
E.g., with 10k photos, I could load a binary and/or XML file into memory, with the appropriate amount of string sharing, to represent all the metadata for each image, and I could load it in 10ms and save it in 10ms. All queries could be done in memory. Changes would be written to the file after a few seconds of inactivity, with a final rename of a tmp file to the destination file ensuring data consistency. Then, if there is an operating system or application crash, you lose a few seconds or minutes of changes, but never a whole image or anything like that. That kind of risk is certainly worth the reward of not sitting there watching the spinning ball of death.
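The save half of that is only a few lines; here’s my own sketch of the tmp-file-plus-rename trick (nothing Aperture actually does, and a real version would fsync before the rename if it wanted the stronger guarantee):

#include <stdio.h>

/* Write the whole metadata blob to a temp file, then rename it over the
   destination. rename() is atomic on the same filesystem, so a crash
   leaves either the old file or the new one -- never a half-written mix. */
int save_atomically(const char *path, const void *buf, size_t len)
{
    char tmp[1024];
    snprintf(tmp, sizeof tmp, "%s.tmp", path);
    FILE *f = fopen(tmp, "wb");
    if (!f)
        return -1;
    size_t wrote = fwrite(buf, 1, len, f);
    if (fclose(f) != 0 || wrote != len)
        return -1;
    return rename(tmp, path);
}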
I hope we can agree that Aperture is crippling impressive modern hardware in unacceptable ways, where even throwing faster disks and more RAM at the problem has almost ZERO impact on performance. No matter how fast a disk gets, it’s always going to be many orders of magnitude slower than multiple GHz of processing power. What we need here is a little creative engineering to solve this problem. Put another way: if Spotlight can search the contents of every single file on my hard drive in a few seconds, giving real-time feedback AS I TYPE, then Aperture should be able to do the same, only WAY better, in the relatively small domain of its photo database.
I just thought I would mention that I have tested Aperture 2.0, and the meta-data slowness problems have been solved, finally.