Slider Settings



Config Preset

The Config Preset dropdown menu displays all of your saved and imported presets at the top, with the NovelAI defaults below. Each model comes with its own set of default presets, most tuned for creative writing and some tuned for particular writing or generation styles. The pen icon beside the dropdown is for renaming the selected preset.

The Import and Export buttons can be used to share your preset or import presets that others have shared. Whenever you modify one of the settings below, the Update active Preset popup will appear below the dropdown, letting you save over the current preset, Reset Changes back to the preset's original settings, or save your current changes as a new preset!



Generation Options

The Generation Options section of the page contains the three baseline generation settings: Randomness, Output Length, and Repetition Penalty. These settings are some of the clearest and easiest to adjust on the fly, and for the most part can be changed without having to adjust any of the Samplers below.

  • Randomness

    At its simplest, the Randomness value determines how much the probabilities of different tokens being generated are equalized: it raises or lowers the differences between token probabilities, but never rearranges their initial order.

    At values above 1, lower probability tokens have their chances increased to be closer to the higher probability tokens. At values below 1, higher probability tokens have their chances increased to be further away from the lower probability ones.

    For example, if we take a short prompt and view the probabilities of the tokens that could be generated after the word "was"...

    [Screenshots: token probabilities at Randomness 1.0 and at Randomness 1.25]

    You can see that at 1.25 the probabilities of the top tokens have been lowered considerably, while the subsequent ones are not dropped as harshly. This gives those lower probability tokens a greater chance of generating than before (a short code sketch of this scaling follows this list).

  • Output Length

    The Output Length setting controls the maximum length of each AI output, from a minimum of 4 characters up to a maximum of 600 depending on your Subscription Tier. Be aware that longer outputs can vary in quality due to the nature of the AI's generation, while shorter ones tend to stay on topic better.

  • Repetition Penalty

    The Repetition Penalty slider applies a penalty to the probability of tokens that appear in context, with those that appear multiple times being penalized more harshly. Higher values apply a harsher penalty, so setting this slider too high can result in output degradation and other undesired behavior, while setting it too low can cause the AI to continuously repeat words or punctuation. Small adjustments to this setting can help guide your story's pacing or focus: lower the slider if you want the AI to mention specific character names or details more often, and raise it if you want more varied word choices.
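
Both of these settings operate on the model's raw token scores (logits) before a token is picked. The following Python sketch is purely illustrative, not NovelAI's actual implementation: it applies a common divisive form of repetition penalty and then temperature (Randomness) scaling to a toy distribution.

```python
import math

def apply_repetition_penalty(logits, previous_tokens, penalty):
    """Penalize tokens that already appear in context.

    Illustrative CTRL-style divisive penalty; NovelAI's exact math may differ.
    """
    logits = list(logits)
    for token_id in set(previous_tokens):
        if logits[token_id] > 0:
            logits[token_id] /= penalty
        else:
            logits[token_id] *= penalty
    return logits

def apply_randomness(logits, temperature):
    """Temperature (Randomness): >1 flattens the distribution, <1 sharpens it.

    Dividing every logit by the same positive value never changes their
    relative order, matching the description above.
    """
    return [logit / temperature for logit in logits]

def softmax(logits):
    """Convert logits to probabilities."""
    peak = max(logits)
    exps = [math.exp(logit - peak) for logit in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy vocabulary of four tokens; token 2 already appeared in context.
logits = apply_repetition_penalty([4.0, 3.0, 2.5, 0.5], previous_tokens=[2], penalty=1.5)
print(softmax(apply_randomness(logits, temperature=1.0)))
print(softmax(apply_randomness(logits, temperature=1.25)))  # probabilities pull closer together
```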



Advanced Options

Sampling

The sampling options below change how token probabilities are redistributed and trimmed during generation.

  • Mirostat

    Mirostat has two sliders, Tau and Learning Rate. This sampler attempts to keep text at a given complexity specified by the Tau value, with higher settings aiming for more complex text. The Learning Rate slider specifies how quickly the sampler adapts to context, with a setting of 1 being instantaneous and lower settings easing in more gradually. Combining this sampler with others in the Settings Order is not recommended (a sketch of one Mirostat step follows this list).

  • Nucleus

    Nucleus sampling, also known as Top-P, starts from the highest probability token and adds up successive tokens' probabilities, in descending order, until their sum reaches the set Nucleus value. All remaining tokens are cut, which increases output consistency, but many lower probability tokens are lost in the process, sacrificing creativity. Small adjustments are recommended when experimenting, as lower settings on this slider remove more tokens. (Top-K, Nucleus, and Top-A are sketched together in code after this list.)

  • Tail-Free

    Intended to replace Top-K and Nucleus sampling, Tail-Free uses a mathematical formula to find the 'tail' of an output's probabilities; both the tail and the Tail-Free sampler itself are explained in detail in this blog post. It determines a threshold below which the lowest probability tokens count as the 'tail' of the output's probability spread, then removes them. After this removal, the surviving tokens have their probabilities readjusted to compensate.

    To simplify, this setting trims what the formula considers the worst possible tokens from the bottom of your output's token probabilities. Small adjustments are recommended when changing this slider, as the closer to 0 you set Tail-Free, the larger the trimmed 'tail' becomes (a sketch of the procedure follows this list).

  • Top A

    The value you set for Top-A sampling defines a cutoff, or limit, on the lowest permitted token probability. Higher Top-A values are stricter and cut more tokens, while lower values cut fewer. The cutoff scales with the highest probability token, so in a pool where the top token has a low probability to begin with, the limit for cutting tokens lands lower than in a pool with a more confident top token and the same Top-A value.

  • Top K

    Top-K is the simplest sampler to use and adjust. When Top-K is applied, the pool of candidate tokens is limited to the number you set. For example, at a Top-K of 10, the sampler removes every token that isn't among the 10 most likely. The downside of this sampler is that if the pool already contains fewer tokens than your Top-K setting, the sampler has no effect.

Goose Tip: Setting Top-K to 1 ensures you get the same token every time when retrying generations!

  • Typical

    Typical sampling is one of the more complex options available. At each generation step it calculates the conditional entropy of the distribution, a measure of the expected information content of the next token, and compares each candidate token's own information content against it. Tokens whose deviation from that expectation is greater than or equal to your Typical setting, or below its negative, are trimmed (see the sketch after this list).

  • Change Settings Order

The Change Settings Order window allows you to change the order of samplers, which are applied from top to bottom. Use the arrow buttons or drag each box individually to rearrange the sampling order, or toggle samplers with the buttons on the right. Temperature (Randomness) cannot be disabled.

The order in which you apply samplers can have unexpected and unpredictable effects; consider starting with a default Config Preset and experimenting from there.
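
Mirostat can be pictured as a feedback loop around the token pool. The sketch below is a hedged, v2-style reading of one Mirostat step; how NovelAI's Tau and Learning Rate sliders map onto these variables is an assumption on our part.

```python
import math
import random

def mirostat_step(pool, mu, tau, learning_rate):
    """One Mirostat (v2-style) sampling step.

    `pool` is a list of (token, probability) pairs; `mu` is the running
    surprise ceiling (often initialized to 2 * tau). Sketch only.
    """
    # Keep tokens whose surprise (-log2 p) stays under the ceiling;
    # always keep at least the most likely token.
    allowed = [(t, p) for t, p in pool if -math.log2(p) < mu] \
        or [max(pool, key=lambda pair: pair[1])]

    # Sample from the surviving tokens, renormalized.
    total = sum(p for _, p in allowed)
    token, prob = random.choices(allowed, weights=[p / total for _, p in allowed])[0]

    # Nudge the ceiling toward the target complexity tau.
    observed_surprise = -math.log2(prob)
    mu -= learning_rate * (observed_surprise - tau)
    return token, mu

pool = [("the", 0.45), ("a", 0.30), ("an", 0.15), ("its", 0.07), ("my", 0.03)]
token, mu = mirostat_step(pool, mu=2 * 3.0, tau=3.0, learning_rate=0.3)
print(token, round(mu, 2))
```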
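
To make the simpler trimming samplers concrete, here is an illustrative Python sketch (not NovelAI's code) of Top-K, Nucleus (Top-P), and Top-A filters over a pool of (token, probability) pairs. Chaining them one after another mirrors how the Settings Order applies samplers from top to bottom.

```python
def top_k(pool, k):
    """Keep only the k most likely tokens; no effect if the pool is already smaller."""
    ranked = sorted(pool, key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

def nucleus(pool, p):
    """Top-P: keep the smallest set of top tokens whose probabilities sum to p."""
    ranked = sorted(pool, key=lambda pair: pair[1], reverse=True)
    kept, total = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        total += prob
        if total >= p:
            break
    return kept

def top_a(pool, a):
    """Top-A: drop tokens below a cutoff scaled from the top token's probability.

    The quadratic cutoff (a * p_max**2) follows the sampler's published
    definition; it lets the limit adapt to how confident the top token is.
    """
    p_max = max(prob for _, prob in pool)
    cutoff = a * p_max ** 2
    return [(token, prob) for token, prob in pool if prob >= cutoff]

pool = [("she", 0.40), ("he", 0.25), ("they", 0.20), ("it", 0.10), ("we", 0.05)]
# Samplers apply in order, top to bottom, like the Settings Order window.
print(top_a(nucleus(top_k(pool, 4), 0.9), 0.1))
```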
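
Tail-Free is harder to picture because the tail is located using second-order differences of the sorted probabilities. The sketch below is a simplified, hedged reading of that procedure; real implementations handle indexing and edge cases more carefully.

```python
def tail_free(probs, z):
    """Trim the low-probability 'tail' found via second differences.

    `probs` must be sorted in descending order. Lower z trims more.
    Simplified sketch; not NovelAI's exact implementation.
    """
    # Discrete first and second derivatives of the sorted probabilities.
    d1 = [probs[i] - probs[i + 1] for i in range(len(probs) - 1)]
    d2 = [abs(d1[i] - d1[i + 1]) for i in range(len(d1) - 1)]
    total = sum(d2) or 1.0
    weights = [x / total for x in d2]

    # Keep tokens until the cumulative curvature weight passes z;
    # everything after that point is considered the tail.
    cumulative, keep = 0.0, 1
    for i, weight in enumerate(weights):
        cumulative += weight
        keep = i + 1
        if cumulative > z:
            break
    kept = probs[:keep + 1]
    scale = sum(kept)
    return [p / scale for p in kept]  # survivors are readjusted to compensate

print(tail_free([0.5, 0.25, 0.12, 0.06, 0.04, 0.02, 0.01], z=0.95))
```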
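
Typical sampling can likewise be sketched. The code below follows the published "locally typical sampling" algorithm, which keeps the tokens whose information content sits closest to the conditional entropy until a probability mass is reached; how NovelAI's Typical slider maps onto this cutoff exactly is an assumption here.

```python
import math

def typical(pool, typical_p):
    """Keep tokens whose information content is closest to the entropy.

    Sketch of locally typical sampling; the exact mapping to the
    Typical slider is an assumption, not something this page states.
    """
    # Conditional entropy: the expected information content of the next token.
    entropy = -sum(p * math.log(p) for _, p in pool)

    # Rank tokens by how far their surprise deviates from that expectation.
    ranked = sorted(pool, key=lambda pair: abs(-math.log(pair[1]) - entropy))

    kept, mass = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        mass += prob
        if mass >= typical_p:
            break
    return kept

pool = [("the", 0.45), ("a", 0.30), ("an", 0.15), ("its", 0.07), ("my", 0.03)]
print(typical(pool, typical_p=0.9))
```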



Repetition Penalty

The options in the Repetition penalties section, as well as the Alternative Repetition Penalty section below, are all intended to make your generations less repetitive.

  • Phrase Repetition Penalty

    See the Advanced: Phrase Repetition Penalty page for more details.

  • Use Default Whitelist

    See the Repetition Penalty Whitelist page for a full list of whitelisted tokens.

  • Range

    Repetition Penalty Range is how many tokens, counting back from the most recent token in your Story Context, will have Repetition Penalty settings applied. When set to the minimum of 0 (off), repetition penalties are applied across your entire context, the same as setting the slider to the maximum allowed by your Subscription Tier. This slider only functions when Dynamic Range is disabled.

  • Slope

    The Slope slider dictates what percentage of your set Repetition Penalties (all except Phrase Repetition Penalty) is applied to each token in context, based on its distance from the most recent token. When disabled, no sloping is applied, and all penalties apply in full.

    When Slope is set to a value at or below 1, only the final token receives 100% of the penalty values; prior tokens see the penalty percentage reduced, with the falloff becoming smoother and more gradual the closer your Slope value is to 0. At exactly 1, each step back reduces the percentage by an identical amount, making your slope a straight upward line (a toy sketch of this straight-line case follows this list).

    At values above 1, the Slope changes from a straight line to a stair-step shape, becoming more intense the closer your Slope value gets to the maximum of 10. In this range, it is possible for several of the most recent tokens to receive 100% of your penalty values; however, a "cliff edge" forms where earlier tokens suddenly have the penalty percentage reduced drastically. At a Slope value of 10, half of context receives 100% of your penalty values, while the other half receives no penalties at all.

  • Dynamic Range

    When enabled, the Dynamic Range toggle makes it so Repetition Penalty settings are only applied to Story text. This means that penalties are not applied to Memory, Author's Note, or Lorebook text within context. Enabling this can allow the AI to reference lore or descriptions from those sections more often, and prevents the Range slider from being adjusted.
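
As an illustration of how Range and the straight-line (Slope = 1) case interact, here is a toy Python sketch. The linear ramp is our reading of the description above, not NovelAI's actual curve: the newest token receives 100% of the penalty, older tokens receive progressively less, and everything outside the Range is untouched.

```python
def penalty_fractions(context_length, rep_range):
    """Fraction of the repetition penalty applied at each context position.

    Toy illustration of Range combined with the Slope = 1 straight-line
    ramp described above; positions outside the Range get no penalty.
    """
    span = rep_range or context_length  # Range 0 (off) covers everything
    fractions = []
    for position in range(context_length):
        distance = context_length - 1 - position  # 0 = most recent token
        if distance >= span:
            fractions.append(0.0)                 # outside the penalty range
        else:
            fractions.append(1.0 - distance / span)  # linear ramp, newest = 1.0
    return fractions

# A 10-token context with penalties sloped over the last 5 tokens.
print([round(f, 2) for f in penalty_fractions(10, rep_range=5)])
# -> [0.0, 0.0, 0.0, 0.0, 0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
```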



Alternative Repetition Penalty

The settings in the Alternative Repetition Penalty section are highly advanced features. Even small adjustments made to these sliders can have vast effects on your AI generations, often cutting out too many tokens and resulting in gibberish outputs. Use very small adjustments when experimenting with these, and be wary of your Range settings when doing so.

  • Presence

    Presence penalty functions similarly to the default Repetition Penalty slider, but applies a single flat penalty to any token that has already appeared in context, regardless of how many times it has. Very small adjustments are recommended when experimenting with Presence penalty, as setting it too high can quickly penalize punctuation tokens out of generating (both penalties are sketched in code below).

  • Frequency

    Frequency penalty applies based on how frequently a token appears, penalizing common tokens more heavily while easing up on less common ones. If set too high, Frequency can quickly degrade outputs, so make very small adjustments when experimenting.
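
Presence and Frequency penalties have a well-known additive formulation (popularized by the OpenAI API). Assuming NovelAI's sliders behave similarly, which is our assumption rather than something this page states, the difference between the two can be sketched like this:

```python
from collections import Counter

def alt_penalties(logits, previous_tokens, presence=0.0, frequency=0.0):
    """Subtract flat (Presence) and count-scaled (Frequency) penalties.

    Follows the common additive formulation; mapping NovelAI's sliders
    onto it directly is an assumption, not confirmed by this page.
    """
    counts = Counter(previous_tokens)
    penalized = list(logits)
    for token_id, count in counts.items():
        # Presence: one flat hit for appearing at all.
        # Frequency: grows with every additional appearance.
        penalized[token_id] -= presence + frequency * count
    return penalized

# Token 0 appeared three times, token 1 once; token 2 never appeared.
print(alt_penalties([2.0, 2.0, 2.0], [0, 0, 0, 1], presence=0.1, frequency=0.2))
# -> token 0 is hit hardest (0.1 + 3 * 0.2); token 2 is untouched
```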