  1. Optimizers - Keras

    Base Optimizer API: these methods and attributes are common to all Keras optimizers. Optimizer class: keras.optimizers.Optimizer()

  2. Optimizers - Keras

    Optimizers: SGD, RMSprop, Adam, AdamW, Adadelta, Adagrad, Adamax, Adafactor, Nadam, Ftrl. apply_gradients method: Optimizer.apply_gradients(grads_and_vars, name=None, …
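    A minimal sketch of the apply_gradients call above, assuming the TensorFlow backend; the one-layer model, loss, and random data are illustrative only.

      import tensorflow as tf
      import keras

      # Toy setup: a one-layer model, MSE loss, random data.
      model = keras.Sequential([keras.layers.Dense(1)])
      optimizer = keras.optimizers.SGD(learning_rate=0.01)
      loss_fn = keras.losses.MeanSquaredError()

      x = tf.random.normal((8, 4))
      y = tf.random.normal((8, 1))

      # One manual training step: compute gradients, then hand
      # (gradient, variable) pairs to apply_gradients.
      with tf.GradientTape() as tape:
          loss = loss_fn(y, model(x, training=True))
      grads = tape.gradient(loss, model.trainable_weights)
      optimizer.apply_gradients(zip(grads, model.trainable_weights))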

  3. SGD - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.

  4. Muon - Keras

    learning_rate: A float, keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use. The learning rate.

  5. Adamax - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.

  6. Ftrl - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.

  7. Adam - Keras

    learning_rate: A float, a keras.optimizers.schedules.LearningRateSchedule instance, or a callable that takes no arguments and returns the actual value to use.
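    Results 3 through 7 describe the same learning_rate contract. A minimal sketch of the three accepted forms; the optimizer and schedule choices here are arbitrary.

      import keras

      # 1. A plain float.
      opt_a = keras.optimizers.SGD(learning_rate=0.01)

      # 2. A LearningRateSchedule instance.
      schedule = keras.optimizers.schedules.PiecewiseConstantDecay(
          boundaries=[1000, 2000], values=[0.1, 0.05, 0.01]
      )
      opt_b = keras.optimizers.Adam(learning_rate=schedule)

      # 3. A callable that takes no arguments and returns the value to use.
      def current_lr():
          return 0.005

      opt_c = keras.optimizers.Adamax(learning_rate=current_lr)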

  8. LearningRateSchedule - Keras

    Several built-in learning rate schedules are available, such as keras.optimizers.schedules.ExponentialDecay or keras.optimizers.schedules.PiecewiseConstantDecay.
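    A minimal sketch of one built-in schedule; the decay settings are arbitrary, and the printed values follow ExponentialDecay's documented formula, initial_learning_rate * decay_rate^(step / decay_steps).

      import keras

      # Rate decays by a factor of 0.96 every 1000 steps, starting from 0.1.
      schedule = keras.optimizers.schedules.ExponentialDecay(
          initial_learning_rate=0.1,
          decay_steps=1000,
          decay_rate=0.96,
      )

      print(float(schedule(0)))     # 0.1
      print(float(schedule(1000)))  # ~0.096

      # A schedule instance is passed wherever a float learning rate is accepted.
      optimizer = keras.optimizers.RMSprop(learning_rate=schedule)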

  9. Adagrad - Keras

    Note that Adagrad tends to benefit from higher initial learning rate values compared to other optimizers. To match the exact form in the original paper, use 1.0.
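    A minimal sketch of that suggestion; the model and loss are placeholders.

      import keras

      # Adagrad with the paper-matching initial learning rate of 1.0,
      # higher than the defaults typically used with other optimizers.
      optimizer = keras.optimizers.Adagrad(learning_rate=1.0)

      model = keras.Sequential([keras.layers.Dense(10, activation="softmax")])
      model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy")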

  10. Learning rate schedules API - Keras

    Keras documentation: Learning rate schedules API. LearningRateSchedule, ExponentialDecay, PiecewiseConstantDecay, PolynomialDecay, InverseTimeDecay, CosineDecay, …
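    For decay behavior not covered by the built-ins, LearningRateSchedule can be subclassed. A hedged sketch assuming Keras 3's keras.ops namespace; the WarmupThenConstant class and its parameters are hypothetical.

      import keras
      from keras import ops

      class WarmupThenConstant(keras.optimizers.schedules.LearningRateSchedule):
          """Hypothetical schedule: linear warmup, then a constant rate."""

          def __init__(self, peak_lr=0.001, warmup_steps=500):
              self.peak_lr = peak_lr
              self.warmup_steps = warmup_steps

          def __call__(self, step):
              # Schedules map the current step to a learning rate value.
              step = ops.cast(step, "float32")
              warmup = self.peak_lr * step / self.warmup_steps
              return ops.minimum(warmup, self.peak_lr)

          def get_config(self):
              # Needed so the schedule can be serialized with the optimizer.
              return {"peak_lr": self.peak_lr, "warmup_steps": self.warmup_steps}

      optimizer = keras.optimizers.Adam(learning_rate=WarmupThenConstant())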