A position transfer hands ownership of a locked LP position from one wallet to another. The current owner signs a single intent; the server rewrites both the position record and the lock record atomically.
## What a transfer does
A transfer moves these values from the sender to the recipient:
- V2 (constant-product): a chosen amount of `lp_shares`, moved from `lp_shares[from]` to `lp_shares[to]`. The sender keeps any residual.
- V3 (concentrated): the entire position at one tick range. The `v3_positions` row at `(pool_id, sender, tickLower, tickUpper)` is deleted and reinserted under the recipient’s key.

The sender must already hold an active lock on the source position; an unlocked or expired position is rejected.
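As a sketch, the two request shapes can be written as a discriminated union. The field names (`newOwnerPublicKey`, `lpTokensToTransfer`, `tickLower`, `tickUpper`) come from this page; the surrounding type names are illustrative assumptions, not published SDK types.

```typescript
/** V2 (constant-product): move a chosen amount of LP shares. Sketch only. */
interface TransferV2 {
  poolId: string;
  newOwnerPublicKey: string;
  /** Positive integer string; must not exceed lp_shares[sender]. */
  lpTokensToTransfer: string;
}

/** V3 (concentrated): move the whole position at one tick range. Sketch only. */
interface TransferV3 {
  poolId: string;
  newOwnerPublicKey: string;
  tickLower: number;
  tickUpper: number;
}

/** Exactly one shape per call; supplying both is rejected client-side. */
type TransferIntent = TransferV2 | TransferV3;
```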
## What a transfer does not do
Owner-keyed accruals stay with the original owner and are never transferred:
- Host fees collected as a host (`v3_host_fees`)
- Integrator fees collected as an integrator (`v3_integrator_fees`)
- Free balance (`v3_user_pool_balances`)
- V2 deposit principal accounting (`liquidity.principal`)
The recipient gets the staked liquidity, not the side balances the sender accumulated.
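To make that boundary concrete, here is a conceptual sketch of the state rewrite for a V3 transfer, assuming simple owner-keyed maps standing in for the records named above (the real storage layout and lock keying are not documented here): the position and lock rows move, the accrual rows do not.

```typescript
// Conceptual sketch only: the maps and their keying are assumptions that
// stand in for the records named on this page.
interface LockRow { lockUntilTimestamp: number | null } // null = indefinite

interface State {
  v3_positions: Map<string, bigint>;          // key: pool:owner:tickLower:tickUpper
  lp_locks: Map<string, LockRow>;             // keying assumed: pool:owner
  v3_host_fees: Map<string, bigint>;          // keyed by original owner; never moved
  v3_integrator_fees: Map<string, bigint>;    // keyed by original owner; never moved
  v3_user_pool_balances: Map<string, bigint>; // keyed by original owner; never moved
}

function applyV3Transfer(
  s: State, poolId: string, sender: string, recipient: string,
  tickLower: number, tickUpper: number,
): void {
  // Position row: deleted and reinserted under the recipient's key.
  const from = `${poolId}:${sender}:${tickLower}:${tickUpper}`;
  const to = `${poolId}:${recipient}:${tickLower}:${tickUpper}`;
  const liquidity = s.v3_positions.get(from);
  if (liquidity === undefined) throw new Error("sender owns no position at this range");
  s.v3_positions.delete(from);
  s.v3_positions.set(to, liquidity);

  // Lock row: rewritten under the recipient's key with the same expiry.
  const lock = s.lp_locks.get(`${poolId}:${sender}`);
  if (!lock) throw new Error("transfer requires a live lock");
  s.lp_locks.delete(`${poolId}:${sender}`);
  s.lp_locks.set(`${poolId}:${recipient}`, lock);

  // v3_host_fees, v3_integrator_fees, v3_user_pool_balances: untouched.
}
```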
## Lock-follow semantics
The lock travels with the shares. The exact behavior depends on what the recipient already holds in the pool.
### V3: lock follows whole position
The sender’s `lp_locks` row is rewritten under the recipient’s key with the same `lockUntilTimestamp`. The recipient cannot withdraw the position until the original expiry (or never, if the lock was indefinite).
### V2: lock-follow gated by `recipient_pre_shares`
V2 stores one `lp_locks` row per `(pool_id, owner)` pair, applied to the entire `lp_shares[owner]` value. A naive lock-follow would let a sender impose their lock terms on the recipient’s pre-existing stake. The server prevents this:
| Recipient state at time of transfer | Lock applied to recipient |
|---|---|
| `lp_shares[recipient] == 0` (fresh in this pool) | New `lp_locks` row at the sender’s expiry |
| `lp_shares[recipient] > 0` (already holds a position) | No write. Recipient’s existing lock state is preserved verbatim. |
The sender’s residual obeys the same rule: the sender’s lock row is deleted only when their balance reaches zero. Otherwise it stays in place at the original expiry.
The trade-off: a sender cannot guarantee that transferred shares stay locked at the recipient if the recipient already held an unlocked position in the pool. To deliver locked shares to an existing recipient, the recipient must call `lockPosition` themselves before or after the transfer.
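A minimal sketch of the gating rule described in the table above, assuming hypothetical owner-keyed maps over a single pool's `lp_shares` and `lp_locks` records; the branch structure mirrors the table and the residual rule.

```typescript
// Sketch of the V2 lock-follow rule; the storage helpers and keying
// (one pool, owner-keyed maps) are assumptions, not documented API.
interface LockRow { lockUntilTimestamp: number | null } // null = indefinite

function applyV2Transfer(
  lpShares: Map<string, bigint>, // lp_shares for one pool, keyed by owner
  lpLocks: Map<string, LockRow>, // lp_locks for one pool, keyed by owner
  sender: string, recipient: string, amount: bigint,
): void {
  const senderShares = lpShares.get(sender) ?? 0n;
  const senderLock = lpLocks.get(sender);
  if (!senderLock) throw new Error("transfer requires a live lock");
  if (amount <= 0n || amount > senderShares) throw new Error("bad amount");

  const recipientPreShares = lpShares.get(recipient) ?? 0n;

  // Move the shares; the sender keeps any residual.
  lpShares.set(sender, senderShares - amount);
  lpShares.set(recipient, recipientPreShares + amount);

  if (recipientPreShares === 0n) {
    // Fresh in this pool: new lock row at the sender's expiry.
    lpLocks.set(recipient, { ...senderLock });
  }
  // recipientPreShares > 0: no write; existing lock state kept verbatim.

  // Sender's lock row is deleted only when their balance reaches zero.
  if (senderShares - amount === 0n) {
    lpLocks.delete(sender);
  }
}
```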
## Failure modes
| Condition | Outcome |
|---|---|
| Sender has no active lock | Rejected. No state changes. |
| Sender’s lock has expired | Rejected. Treated the same as no lock. |
| `newOwnerPublicKey` equals the sender | Rejected client-side before signing. |
| `newOwnerPublicKey` is malformed or all zeros | Rejected client-side. |
| V2: `lpTokensToTransfer` is not a positive integer string | Rejected client-side. |
| V2: requested amount exceeds `lp_shares[sender]` | Rejected after the share lookup; no commit. |
| V3: `tickLower`/`tickUpper` does not match a position the sender owns | Rejected on the position read. |
| V3: recipient already owns a position at the same tick range | Rejected. The recipient must move that position first. |
| Both V2 and V3 fields supplied in one call | Rejected client-side. Pick one shape. |
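The client-side rejections above can be reproduced with checks like these before signing. A sketch: the field names come from this page, but the key format (hex-encoded) and the function itself are assumptions.

```typescript
// Pre-signing checks mirroring the client-side rows above. The hex key
// format is an assumption, not a documented rule.
function validateTransfer(intent: {
  senderPublicKey: string;
  newOwnerPublicKey: string;
  lpTokensToTransfer?: string;            // V2 only
  tickLower?: number; tickUpper?: number; // V3 only
}): void {
  const to = intent.newOwnerPublicKey;
  if (to === intent.senderPublicKey) throw new Error("recipient equals sender");
  if (!/^[0-9a-fA-F]+$/.test(to) || /^0+$/.test(to))
    throw new Error("recipient key malformed or all zeros");

  const isV2 = intent.lpTokensToTransfer !== undefined;
  const isV3 = intent.tickLower !== undefined || intent.tickUpper !== undefined;
  if (isV2 && isV3) throw new Error("both V2 and V3 fields supplied; pick one shape");

  if (isV2 && !/^[1-9][0-9]*$/.test(intent.lpTokensToTransfer!))
    throw new Error("lpTokensToTransfer must be a positive integer string");
}
```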
Replay protection runs at the gateway. A nonce reused inside the dedup window returns 409 before the transfer reaches the settlement service.
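On the client, this suggests generating a fresh nonce per signed intent and re-signing on a dedup hit. A sketch: `signAndSubmit` is an assumed helper, and only the 409-on-reused-nonce behavior comes from this page.

```typescript
// Retry with a fresh nonce on a 409 from the gateway's dedup window.
async function submitWithFreshNonce<R extends { status: number }>(
  signAndSubmit: (nonce: string) => Promise<R>,
  maxAttempts = 3,
): Promise<R> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const nonce = crypto.randomUUID();  // fresh nonce per signed intent
    const res = await signAndSubmit(nonce);
    if (res.status !== 409) return res; // 409: nonce reused in dedup window
  }
  throw new Error("nonce rejected repeatedly within the dedup window");
}
```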
## Scoping: V2 vs V3
| Pool type | Granularity | Required fields |
|---|---|---|
| Constant-product (V2) | Partial amount of LP shares | `lpTokensToTransfer` (positive integer string), `newOwnerPublicKey` |
| Concentrated (V3) | Whole position at one tick range | `tickLower`, `tickUpper`, `newOwnerPublicKey` |
V3 transfers always move the entire position at the named range. There is no partial V3 transfer; the closest equivalent is `decreaseLiquidity` followed by sending the unlocked output through Spark.
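That workaround, as a hedged two-step sketch: `decreaseLiquidity` is named on this page, but both function signatures here are assumptions, and the Spark send is not specified in this section.

```typescript
// Sketch of the closest V3 equivalent to a partial transfer. Both callback
// signatures are assumptions; only the two-step shape comes from this page.
async function partialV3Transfer(
  decreaseLiquidity: (p: {
    poolId: string; tickLower: number; tickUpper: number; liquidity: bigint;
  }) => Promise<{ amount0: bigint; amount1: bigint }>,
  sparkSend: (recipient: string, amount0: bigint, amount1: bigint) => Promise<void>,
  params: {
    poolId: string; tickLower: number; tickUpper: number;
    liquidity: bigint; recipient: string;
  },
): Promise<void> {
  // Step 1: withdraw part of the position's liquidity.
  const out = await decreaseLiquidity(params);
  // Step 2: send the unlocked output to the recipient through Spark.
  await sparkSend(params.recipient, out.amount0, out.amount1);
}
```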
## Next steps