memtorch.map

Submodule containing various mapping, scaling, and encoding methods.

memtorch.map.Input

Encapsulates internal methods to encode (scale) input values as bit-line voltages. Methods can either be specified when converting individual layers:

import torch
import memtorch
from memtorch.map.Input import naive_scale

m = memtorch.mn.Linear(torch.nn.Linear(10, 10),
                       memtorch.bh.memristor.VTEAM,
                       {},
                       tile_shape=(64, 64),
                       scaling_routine=naive_scale)

or when converting torch.nn.Module instances:

import copy
import torch
import memtorch
from memtorch.mn.Module import patch_model
from memtorch.map.Input import naive_scale
from Net import Net  # assumed user-defined torch.nn.Module subclass

model = Net()
patched_model = patch_model(copy.deepcopy(model),
                            memtorch.bh.memristor.VTEAM,
                            {},
                            module_parameters_to_patch=[torch.nn.Linear],
                            scaling_routine=naive_scale)

memtorch.map.Input.naive_scale(module, input, force_scale=False)

Naive method to encode input values as bit-line voltages.

Parameters:
  • module (torch.nn.Module) – Memristive layer to tune.
  • input (torch.Tensor) – Input tensor to encode.
  • force_scale (bool, optional) – Used to determine whether inputs are scaled (True) or not (False) when they do not exceed max_input_voltage.
Returns:
  Encoded voltages.
Return type:
  torch.Tensor

Note

force_scale is used to specify whether inputs smaller than or equal to max_input_voltage are scaled or not.
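
As a rough illustration of the behaviour described above, a naive linear encoding can be sketched as follows (naive_scale_sketch and the rescaling formula are illustrative assumptions, not MemTorch's internal implementation):

import torch

def naive_scale_sketch(input, max_input_voltage, force_scale=False):
    # Inputs are linearly rescaled so that the largest magnitude maps to
    # max_input_voltage; inputs already within range are left untouched
    # unless force_scale is True.
    max_magnitude = input.abs().max()
    if max_magnitude == 0 or (max_magnitude <= max_input_voltage and not force_scale):
        return input  # nothing to scale, or scaling not forced

    return input * (max_input_voltage / max_magnitude)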

memtorch.map.Module

Encapsulates internal methods to determine relationships between readout currents of memristive crossbars and desired outputs.

Warning

Currently, only naive_tune is supported. In a future release, externally-defined methods will be supported.
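
In practice, naive_tune is rarely called directly; tuning is typically triggered through a converted module or patched model, as in the sketch below (Net is assumed to be a user-defined torch.nn.Module, and the tune_() call follows MemTorch's usage examples):

import copy
import torch
import memtorch
from memtorch.mn.Module import patch_model
from Net import Net  # assumed user-defined torch.nn.Module subclass

model = Net()
patched_model = patch_model(copy.deepcopy(model),
                            memtorch.bh.memristor.VTEAM,
                            {},
                            module_parameters_to_patch=[torch.nn.Linear])
patched_model.tune_()  # tunes each memristive layer using naive_tune internally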

memtorch.map.Module.naive_tune(module, input_shape, verbose=True)

Method to determine a linear relationship between the readout currents of a memristive crossbar and the desired output of a given memristive module.

Parameters:
  • module (torch.nn.Module) – Memristive layer to tune.
  • input_shape (int, int) – Shape of the randomly generated input used to tune a crossbar.
  • verbose (bool, optional) – Used to determine if verbose output is enabled (True) or disabled (False).
Returns:
  Function which transforms the output of the crossbar to the expected output.
Return type:
  function
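
Conceptually, the tuning procedure fits a linear transformation between the crossbar readout and the expected output of the original layer. A minimal sketch of this idea is given below; the forward_legacy call and least-squares fit are assumptions for illustration, not MemTorch's exact implementation:

import torch

def naive_tune_sketch(module, input_shape):
    # Propagate a random input through both the memristive crossbar and the
    # ideal (legacy) layer, then fit desired ~= gradient * readout + intercept.
    input = torch.rand(input_shape)
    readout = module(input).detach().flatten()
    desired = module.forward_legacy(input).detach().flatten()  # assumed ideal output
    A = torch.stack([readout, torch.ones_like(readout)], dim=1)
    gradient, intercept = torch.linalg.lstsq(A, desired.unsqueeze(1)).solution.flatten()
    return lambda output: gradient * output + intercept

The returned closure can then be applied to the crossbar readout during inference to recover the expected output.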

memtorch.map.Parameter

Encapsulates internal methods to naively map network parameters to memristive device conductance values. Methods can either be specified when converting individual layers:

import torch
import memtorch
from memtorch.map.Parameter import naive_map

m = memtorch.mn.Linear(torch.nn.Linear(10, 10),
                       memtorch.bh.memristor.VTEAM,
                       {},
                       tile_shape=(64, 64),
                       mapping_routine=naive_map)

or when converting torch.nn.Module instances:

import copy
import torch
import memtorch
from memtorch.mn.Module import patch_model
from memtorch.map.Parameter import naive_map
from Net import Net  # assumed user-defined torch.nn.Module subclass

model = Net()
patched_model = patch_model(copy.deepcopy(model),
                            memtorch.bh.memristor.VTEAM,
                            {},
                            module_parameters_to_patch=[torch.nn.Linear],
                            mapping_routine=naive_map)

memtorch.map.Parameter.naive_map(weight, r_on, r_off, scheme, p_l=None)

Method to naively map network parameters to memristive device conductances, using two crossbars to represent positive and negative weights, respectively.

Parameters:
  • weight (torch.Tensor) – Weight tensor to map.
  • r_on (float) – Low resistance state.
  • r_off (float) – High resistance state.
  • scheme (memtorch.bh.crossbar.Scheme) – Weight representation scheme.
  • p_l (float, optional) – If not None, the proportion of weights to retain.
Returns:
  Positive and negative crossbar weights.
Return type:
  torch.Tensor, torch.Tensor
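
As a rough sketch of such a double-column mapping (illustrative only; the normalization and linear interpolation between 1 / r_off and 1 / r_on are assumptions, and the p_l pruning step is omitted):

import torch

def naive_map_sketch(weight, r_on, r_off):
    # Positive and negative weights are mapped onto separate crossbars, with
    # conductances interpolated between g_off = 1 / r_off and g_on = 1 / r_on.
    g_on, g_off = 1 / r_on, 1 / r_off
    scale = weight.abs().max().clamp(min=1e-12)  # avoid division by zero
    pos = torch.clamp(weight, min=0) / scale     # normalized positive weights
    neg = torch.clamp(-weight, min=0) / scale    # normalized magnitudes of negative weights
    crossbar_pos = g_off + pos * (g_on - g_off)
    crossbar_neg = g_off + neg * (g_on - g_off)
    return crossbar_pos, crossbar_neg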