Commit ffcdde3: "better docs"
Author: Nick Kallen
1 parent 6369b07

1 file changed: README.markdown (+30 -24)
@@ -1,7 +1,13 @@
-== Howto ==
-=== What kinds of queries are supported? ===
+## What is Cache Money ##

-In general, any query involving equality (=) and conjunction (AND) is supported by `Cache Money`. Disjunction (OR) and inequality (!=, <=, etc.) are not typically materialized in a hash table style index and are unsupported at this time.
+Cache Money is a write-through and read-through caching library for ActiveRecord.
+
+Read-Through: Queries like `User.find(:all, :conditions => ...)` will first look in Memcached and then look in the database for the results of that query. If there is a cache miss, it will populate the cache.
+
+Write-Through: As objects are created, updated, and deleted, all of the caches are *automatically* kept up-to-date and coherent.
+
+## Howto ##
+### What kinds of queries are supported? ###

 Many styles of ActiveRecord usage are supported:

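The read-through/write-through behavior described in the new text can be pictured with a minimal pure-Ruby sketch (the class and names here are illustrative, not Cache Money's internals):

```ruby
# Illustrative sketch only -- not Cache Money's real implementation.
# Read-through: look in the cache first, fall back to the "database",
# and populate the cache on a miss.
DATABASE = { 1 => "alice", 2 => "bob" }  # stand-in for the real database

class ReadThroughCache
  attr_reader :misses

  def initialize
    @store = {}   # stand-in for Memcached
    @misses = 0
  end

  def fetch(key)
    return @store[key] if @store.key?(key)
    @misses += 1
    @store[key] = DATABASE[key]  # cache miss: populate from the database
  end

  # Write-through: updates go to the database and the cache together,
  # keeping the cache coherent.
  def write(key, value)
    DATABASE[key] = value
    @store[key] = value
  end
end

cache = ReadThroughCache.new
cache.fetch(1)          # miss: hits the database, populates the cache
cache.fetch(1)          # hit: served from the cache
cache.write(2, "bill")  # write-through keeps both copies in sync
```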
@@ -15,35 +21,35 @@ Many styles of ActiveRecord usage are supported:

 As you can see, the `find_by_`, `find_all_by_`, hash, array, and string forms are all supported.

-Queries with joins/includes are unsupported at this time.
+Queries with joins/includes are unsupported at this time. In general, any query involving just equality (=) and conjunction (AND) is supported by `Cache Money`. Disjunction (OR) and inequality (!=, <=, etc.) are not typically materialized in a hash-table-style index and are unsupported at this time.

 Queries with limits and offsets are supported. In general, however, if you are running queries with limits and offsets you are dealing with large datasets. It's more performant to place a limit on the size of the `Cache Money` index like so:

     DirectMessage.index :user_id, :limit => 1000

 In this example, only queries whose limit and offset are less than 1000 will use the cache.

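The eligibility rule in that last sentence can be sketched as a small hypothetical predicate (not part of Cache Money's API):

```ruby
# Hypothetical helper illustrating the rule above -- not Cache Money API.
# A query can be served from an index capped at `index_limit` rows only
# if it never needs rows beyond that window.
def uses_cache?(index_limit, limit:, offset: 0)
  limit + offset <= index_limit
end

uses_cache?(1000, limit: 20, offset: 0)    # served from the cache
uses_cache?(1000, limit: 20, offset: 990)  # falls through to the database
```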
-=== Multiple indices are supported ===
+### Multiple indices are supported ###

     class User
       index :id
       index :screen_name
       index :email
     end

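One way to picture what each declared index buys you: a cache entry per attribute value, mapping to the matching ids, so an equality query becomes a single lookup. The key format below is made up for illustration; it is not Cache Money's real key scheme:

```ruby
# Illustrative sketch of a per-attribute index in Memcached.
# The key format here is invented, not Cache Money's actual scheme.
def index_key(klass, attribute, value)
  "#{klass}:#{attribute}/#{value}"
end

# With `index :screen_name` declared, a query on screen_name becomes a
# single cache lookup for the list of matching ids:
cache = {
  index_key("User", :screen_name, "bob")       => [17],
  index_key("User", :email, "bob@example.com") => [17],
}
cache[index_key("User", :screen_name, "bob")]  # list of matching ids
```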
-==== with_scope support ====
+#### with_scope support ####

 `with_scope` and the like (`named_scope`, `has_many`, `belongs_to`, etc.) are fully supported. For example, `user.devices.find(1)` will first look in the cache if there is an index like this:

     class Device
       index [:user_id, :id]
     end

-=== Ordered indices ===
+### Ordered indices ###

-class Message
-  index :sender_id, :order => :desc
-end
+    class Message
+      index :sender_id, :order => :desc
+    end

 The order declaration will ensure that the index is kept in the correctly sorted order. Only queries with order clauses compatible with the ordering in the index will use the cache:

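Keeping an index "in the correctly sorted order" amounts to placing each new id at its sorted position on insert. A pure-Ruby sketch of a descending insert (matching `:order => :desc`; not Cache Money's real code):

```ruby
# Sketch of maintaining a descending-ordered index list on insert.
# Illustrative only -- not Cache Money's implementation.
def insert_ordered_desc(ids, new_id)
  # Find the first id smaller than the new one; append if none exists.
  position = ids.index { |id| id < new_id } || ids.length
  ids.insert(position, new_id)
end

index = [9, 7, 4]
insert_ordered_desc(index, 8)  # the index stays sorted descending
```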
@@ -62,19 +68,19 @@ will support queries like:

 Note that ascending order is implicit in index declarations (i.e., not specifying an order is the same as ascending). This is also true of queries (order is not nondeterministic as it is in MySQL).

-=== Window indices ===
+### Window indices ###

-class Message
-  index :sender_id, :limit => 500, :buffer => 100
-end
+    class Message
+      index :sender_id, :limit => 500, :buffer => 100
+    end

 With a limit attribute, indices will only store limit + buffer items in the cache. As new objects are created the index will be truncated, and as objects are destroyed, the cache will be refreshed if it has fewer than the limit of items. The buffer is how many "extra" items to keep around in case of deletes.

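The truncate-on-insert behavior of a window index can be sketched in a few lines of plain Ruby (illustrative only; the refresh-from-database step is stubbed out as a comment):

```ruby
# Sketch of a window index: store at most limit + buffer ids,
# truncating on insert. Not Cache Money's internals.
class WindowIndex
  attr_reader :ids

  def initialize(limit:, buffer:)
    @limit, @buffer, @ids = limit, buffer, []
  end

  def add(id)
    @ids.unshift(id)
    @ids = @ids.first(@limit + @buffer)  # truncate to the window
  end

  def remove(id)
    @ids.delete(id)
    # A real implementation would refresh from the database here when
    # fewer than @limit items remain.
  end
end

index = WindowIndex.new(limit: 3, buffer: 1)
(1..6).each { |id| index.add(id) }
index.ids  # only limit + buffer (4) newest ids are retained
```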
-=== Calculations ===
+### Calculations ###

 `Message.count(:all, :conditions => {:sender_id => ...})` will use the cache rather than the database. This happens for "free" -- no additional declarations are necessary.

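This is "free" because a count over an indexed attribute is just the size of the already-cached id list; a sketch, using the same invented key format as above:

```ruby
# If the id list for an index is already in the cache, a count is just
# the size of that list -- no SQL needed. Key format is illustrative.
def cached_count(cache, key)
  cache.fetch(key).size
end

cached_index = { "Message:sender_id/7" => [101, 102, 103] }
cached_count(cached_index, "Message:sender_id/7")  # counts cached ids
```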
-=== Transactions ===
+### Transactions ###

 Because parallel requests can write to the same indices, race conditions are possible. We have created a pessimistic "transactional" Memcached client to handle the locking issues.

@@ -89,7 +95,7 @@ The writes to the cache are buffered until the transaction is committed. Reads w

 Writes are not truly atomic as reads do not pay attention to locks. Therefore, it is possible to peek inside a partially committed transaction. This is a performance compromise, since acquiring a lock for a read was deemed too expensive. Again, the critical region is as small as possible, reducing the frequency of such "peeks".

-==== Rollbacks ====
+#### Rollbacks ####

     CACHE.transaction do
       CACHE.set(k, v)
@@ -100,29 +106,29 @@ Because transactions buffer writes, an exception in a transaction ensures that t

 Nested transactions are fully supported, with partial rollback and (apparent) partial commitment (this is simulated with nested buffers).

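The "nested buffers" idea can be sketched in plain Ruby: each transaction pushes a buffer, commits merge the buffer into the parent (or the store), and an exception discards only the innermost buffer. This is a simplified model, not the real Cash transactional client:

```ruby
# Sketch of buffered, nestable transactions with rollback.
# Illustrative only -- not the real Cash transactional client.
class BufferedCache
  def initialize
    @store = {}
    @buffers = []  # stack of nested transaction buffers
  end

  def get(key)
    # Innermost buffers shadow outer ones and the store.
    @buffers.reverse_each { |b| return b[key] if b.key?(key) }
    @store[key]
  end

  def set(key, value)
    (@buffers.last || @store)[key] = value
  end

  def transaction
    @buffers.push({})
    result = yield
    committed = @buffers.pop
    (@buffers.last || @store).merge!(committed)  # commit into parent
    result
  rescue
    @buffers.pop  # rollback: discard only this buffer
    raise
  end
end

cache = BufferedCache.new
cache.transaction do
  cache.set(:a, 1)
  begin
    cache.transaction { cache.set(:b, 2); raise "boom" }
  rescue RuntimeError
  end
end
# The outer write to :a commits; the inner write to :b rolled back.
```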
-=== Mocks ===
+### Mocks ###

 For your unit tests, it is faster to use a Memcached mock than the real deal. Just place this in the initializer for your test environment:

     $memcache = Cash::Mock.new

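In spirit, such a mock is just an in-memory hash behind the client's `get`/`set` interface. A minimal stand-in (illustrative; the real `Cash::Mock` does more, such as recording calls):

```ruby
# Minimal in-memory stand-in for a Memcached client, in the spirit of
# Cash::Mock. Illustrative only.
class MemcacheMock
  def initialize
    @data = {}
  end

  def set(key, value, _ttl = 0)
    @data[key] = value
  end

  def get(key)
    @data[key]
  end
end

$memcache = MemcacheMock.new
$memcache.set("user:1", "alice")
$memcache.get("user:1")  # reads back the stored value, no server needed
```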
-=== Locks ===
+### Locks ###

-In most locks are unnecessary; the transactional memcache client will take care locks for you automatically and guarantees that no deadlocks can occur. But for very complex distributed transactions, shared locks are necessary.
+In most cases locks are unnecessary; the transactional Memcached client will take care of locks for you automatically and guarantees that no deadlocks can occur. But for very complex distributed transactions, shared locks are necessary.

     $lock.synchronize('lock_name') do
       $memcache.set("key", "value")
     end

-=== Local Cache ===
+### Local Cache ###

 Sometimes your code will request the same cache key twice in one request. You can avoid a round trip to the Memcached server by using a local, per-request cache. Add this to your initializer:

     $local = Cash::Local.new($memcache)
     $cache = Cash::Transactional.new($local, $lock)

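The per-request layer amounts to a hash in front of the remote client: the first read of a key pays the round trip, repeats are served locally, and the hash is cleared between requests. A rough sketch (illustrative names; not the real `Cash::Local`):

```ruby
# Sketch of a local, per-request cache in front of the remote client.
# Illustrative only -- not the real Cash::Local.
class LocalCache
  attr_reader :remote_gets

  def initialize(remote)
    @remote, @local, @remote_gets = remote, {}, 0
  end

  def get(key)
    return @local[key] if @local.key?(key)  # served locally, no round trip
    @remote_gets += 1
    @local[key] = @remote[key]              # one round trip, then memoized
  end

  def reset!  # call at the end of each request
    @local.clear
  end
end

remote = { "user:1" => "alice" }  # stand-in for the Memcached server
cache = LocalCache.new(remote)
cache.get("user:1")
cache.get("user:1")  # second read is free: only one remote round trip
```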
-== Installation ==
-==== Step 1: `config/initializers/cache_money.rb` ====
+## Installation ##
+#### Step 1: `config/initializers/cache_money.rb` ####

 Place this in `config/initializers/cache_money.rb`

@@ -136,7 +142,7 @@ Place this in `config/initializers/cache_money.rb`
       is_cached :repository => $cache
     end

-==== Step 2: Add indices to your ActiveRecord models ====
+#### Step 2: Add indices to your ActiveRecord models ####

 Queries like `User.find(1)` will use the cache automatically. For more complex queries you must add indices on the attributes that you will query on. For example, a query like `User.find(:all, :conditions => {:name => 'bob'})` will require an index like:
