acetn.utils package
Submodules
acetn.utils.benchmarking module
- acetn.utils.benchmarking.record_runtime(func)
A decorator that records the runtime of a single function execution.
This function measures the time taken by the decorated function to execute. If a GPU is available, it uses CUDA events to measure the execution time in seconds. If no GPU is available, it falls back to using CPU time with time.time().
- Args:
func (function): The function whose runtime needs to be recorded.
- Returns:
A tuple of the function’s output and its runtime in seconds, or just the runtime if the function returns None.
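A minimal usage sketch based on the documented return convention; the contraction workload and tensor sizes below are illustrative, not part of acetn:
```python
import torch
from acetn.utils.benchmarking import record_runtime

@record_runtime
def contract(a, b):
    # illustrative workload: a dense matrix product
    return torch.einsum("ij,jk->ik", a, b)

a = torch.rand(512, 512)
b = torch.rand(512, 512)
out, runtime = contract(a, b)  # documented return: (output, runtime in seconds)
print(f"contraction took {runtime:.4f} s")
```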
- acetn.utils.benchmarking.record_runtime_ave(func, num_record=20, num_warmup=10)
A decorator that records the average runtime of a function over multiple executions.
This function runs the decorated function multiple times, discards the results of a warmup phase, and then calculates the average runtime over a specified number of executions. If a GPU is available, it uses CUDA events for timing. If profiling is enabled via environment variables, it also supports integration with nsys profiling.
- Args:
func (function): The function whose average runtime needs to be recorded.
num_record (int, optional): The number of times to run the function for benchmarking. Default is 20.
num_warmup (int, optional): The number of warmup iterations to run before benchmarking. Default is 10.
- Returns:
A tuple of the function’s output and its average runtime in seconds, or just the average runtime if the function returns None.
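A sketch of benchmarking with custom iteration counts. Since func is the first positional argument, the wrapper can be bound directly; that the returned callable forwards its arguments unchanged is an assumption of this sketch:
```python
import torch
from acetn.utils.benchmarking import record_runtime_ave

def step(x):
    # illustrative workload
    return x @ x

# bind custom counts; defaults are num_record=20, num_warmup=10
timed_step = record_runtime_ave(step, num_record=50, num_warmup=5)

x = torch.rand(256, 256)
out, ave_runtime = timed_step(x)
print(f"average runtime over 50 recorded runs: {ave_runtime:.4f} s")
```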
acetn.utils.distributed module
- acetn.utils.distributed.all_gather_tensor(tensor, rank, ws)
Gathers tensors from all workers in a distributed setting.
- Args:
tensor (Tensor): The tensor to be gathered.
rank (int): The rank of the current worker.
ws (int): The total number of workers in the distributed setup.
- Returns:
list: A list of gathered tensors from all workers.
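A hedged sketch of gathering a per-rank tensor. It assumes the process group is already initialized (see setup_distributed below) and that the returned list follows the rank ordering of torch.distributed.all_gather:
```python
import torch
import torch.distributed as dist
from acetn.utils.distributed import all_gather_tensor

rank = dist.get_rank()
ws = dist.get_world_size()

local = torch.full((4,), float(rank))  # each worker holds rank-dependent data
gathered = all_gather_tensor(local, rank, ws)
# gathered is a list of ws tensors, assumed ordered by rank
```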
- acetn.utils.distributed.finalize_distributed()
Finalizes the distributed environment and cleans up resources.
- acetn.utils.distributed.get_device_count(device)
Calculates the device count for a distributed setup on supported devices.
- acetn.utils.distributed.setup_distributed(device)
Initializes the distributed environment and device configuration.
- Args:
device (torch.device): The device to use (e.g., 'cpu' or 'cuda').
- acetn.utils.distributed.setup_distributed_print(rank)
Redirects print output so that only rank 0 prints during distributed execution.
- acetn.utils.distributed.setup_process_group(rank, world_size, device)
Initializes the process group for multi-device execution.
Sets the backend and assigns a device to each rank.
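A hedged end-to-end sketch of a typical lifecycle for these helpers, in call order. The entries above do not document a return value for setup_distributed, so the rank is read back from torch.distributed here as an assumption:
```python
import torch
import torch.distributed as dist
from acetn.utils.distributed import (
    setup_distributed,
    setup_distributed_print,
    finalize_distributed,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
setup_distributed(device)  # initialize the environment and device configuration

# assumption: the current rank is available via torch.distributed
rank = dist.get_rank() if dist.is_initialized() else 0
setup_distributed_print(rank)  # from here on, only rank 0 prints

print("printed once, from rank 0")

finalize_distributed()  # clean up process-group resources
```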
acetn.utils.logger module
- class acetn.utils.logger.Logger(name: str, log_level=20, log_file='debug.log')
Bases: object
- get_logger()
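A minimal sketch of the Logger wrapper; log_level=20 corresponds to logging.INFO in the standard library, and it is assumed here that get_logger() returns a logging.Logger-compatible object:
```python
from acetn.utils.logger import Logger

# assumption: get_logger() hands back a logging.Logger-compatible object
logger = Logger("acetn.example", log_level=20, log_file="debug.log").get_logger()
logger.info("iPEPS run started")
```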
- acetn.utils.logger.log_device_info(device)
Print information about the computation device.
- acetn.utils.logger.log_evolve_start_message(dtau, dims, model)
Print evolution startup information.
- Args:
dtau: Imaginary-time step used in evolution.
dims: Dictionary containing iPEPS dimensions.
model: Model instance being used in evolution.
- acetn.utils.logger.log_initial_message(device, config)
Print startup information and configuration.
- Args:
device (torch.device): The computation device in use.
config (dict): Original configuration dictionary.
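A sketch of a typical startup logging sequence. The config dictionary below is a placeholder, not a real acetn configuration; log_evolve_start_message is omitted because it requires a live model instance:
```python
import torch
from acetn.utils.logger import log_device_info, log_initial_message

device = torch.device("cpu")
config = {"dtype": "float64", "device": "cpu"}  # placeholder configuration

log_initial_message(device, config)  # startup banner and configuration
log_device_info(device)              # details of the computation device
```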