continue_text¶
basemode.continue_.continue_text
async def continue_text(
prefix: str,
model: str = "gpt-4o-mini",
*,
max_tokens: int = 200,
temperature: float = 0.9,
context: str = "",
strategy: str | None = None,
rewind: bool = False,
**extra,
) -> AsyncGenerator[str, None]
Stream a single continuation token-by-token.
Notes¶
- Model names are normalized before strategy selection.
- `strategy` overrides auto-detection.
- `rewind=True` rewinds short trailing word fragments for the `system`/`few_shot` strategies.
- `**extra` is forwarded to LiteLLM as request kwargs.
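Since `continue_text` returns an async generator, callers consume it with `async for`. A minimal sketch, using a hypothetical stub generator in place of the real LiteLLM-backed call (the stub tokens and prefix are illustrative, not actual model output):

```python
import asyncio
from typing import AsyncGenerator

async def continue_text_stub(prefix: str, **kwargs) -> AsyncGenerator[str, None]:
    # Stand-in for basemode.continue_.continue_text:
    # yields the continuation one token at a time.
    for token in [" over", " the", " lazy", " dog"]:
        yield token

async def main() -> str:
    prefix = "The quick brown fox jumps"
    out = prefix
    # Append each streamed token to the prefix as it arrives.
    async for token in continue_text_stub(prefix, temperature=0.9):
        out += token
    return out

result = asyncio.run(main())
print(result)
```

The same loop shape applies to the real function; keyword arguments such as `strategy` or `rewind` would be passed where the stub takes `**kwargs`.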