experiment_impact_tracker package¶
Subpackages¶
Submodules¶
experiment_impact_tracker.compute_tracker module¶
- class experiment_impact_tracker.compute_tracker.ImpactTracker(logdir)¶ Bases: object
  - get_latest_info_and_check_for_errors()¶
  - launch_impact_monitor()¶
- experiment_impact_tracker.compute_tracker.gather_initial_info(log_dir)¶
- experiment_impact_tracker.compute_tracker.launch_power_monitor(queue, log_dir, initial_info, logger=None)¶
- experiment_impact_tracker.compute_tracker.read_latest_stats(log_dir)¶
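Putting the class and its methods above together, a typical usage sketch looks like the following. The log directory name is hypothetical, and error handling around the monitor is left to the caller.

```python
from experiment_impact_tracker.compute_tracker import ImpactTracker

# Create a tracker that writes its logs under a (hypothetical) directory.
tracker = ImpactTracker("my_experiment_logs")

# Start the power monitor; it runs in a separate process.
tracker.launch_impact_monitor()

# ... training loop ...

# Periodically surface any errors raised inside the monitor process:
info = tracker.get_latest_info_and_check_for_errors()
```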
experiment_impact_tracker.constants module¶
- experiment_impact_tracker.constants.load_regions_with_bounding_boxes()¶ Loads bounding boxes as shapely objects.
  - Returns
    list of shapely objects containing regional geometries
  - Return type
    list
- experiment_impact_tracker.constants.read_terrible_json(path)¶ Reads a slightly malformed JSON file in which each line is a separate JSON dict (JSON-lines style).
  - Parameters
    path (string) – the filepath to read from
  - Returns
    list of dictionaries
  - Return type
    [dict]
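The documented behavior can be re-implemented in a few lines; this is a minimal sketch of the line-per-dict parsing described above, not the package's actual implementation.

```python
import json
import tempfile

def read_terrible_json(path):
    """Sketch: parse a file where each line is its own JSON dict,
    returning a list of dicts. Blank lines are tolerated."""
    records = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# Usage with a throwaway JSON-lines file:
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write('{"region": "US-CA"}\n{"region": "FR"}\n')
    path = f.name

print(read_terrible_json(path))  # [{'region': 'US-CA'}, {'region': 'FR'}]
```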
experiment_impact_tracker.create_graph_appendix module¶
- experiment_impact_tracker.create_graph_appendix.create_graphs(input_path: str, output_path: str = '.', fig_x: int = 16, fig_y: int = 8, max_level=None)¶
- experiment_impact_tracker.create_graph_appendix.create_scatterplot_from_df(df, x: str, y: str, output_path: str = '.', fig_x: int = 16, fig_y: int = 8)¶ Loads an executive summary df and creates a scatterplot from some pre-specified variables.
  - Parameters
    df (pandas.DataFrame) – the executive summary dataframe to plot from
    x (str) – column to plot on the x-axis
    y (str) – column to plot on the y-axis
    output_path (str, optional) – directory to write the figure to. Defaults to ‘.’.
    fig_x (int, optional) – figure width. Defaults to 16.
    fig_y (int, optional) – figure height. Defaults to 8.
- experiment_impact_tracker.create_graph_appendix.dateparse(time_in_secs)¶
- experiment_impact_tracker.create_graph_appendix.handle_cpu_count_adjusted_average_load(df)¶
experiment_impact_tracker.data_info_and_router module¶
experiment_impact_tracker.data_utils module¶
- experiment_impact_tracker.data_utils.load_data_into_frame(log_dir, max_level=None)¶
- experiment_impact_tracker.data_utils.load_initial_info(log_dir)¶
- experiment_impact_tracker.data_utils.log_final_info(log_dir)¶
- experiment_impact_tracker.data_utils.safe_file_path(file_path)¶
- experiment_impact_tracker.data_utils.write_csv_data_to_file(file_path, data, overwrite=False)¶
- experiment_impact_tracker.data_utils.write_json_data_to_file(file_path, data, overwrite=False)¶
- experiment_impact_tracker.data_utils.zip_data_and_info(log_dir, zip_path)¶
- experiment_impact_tracker.data_utils.zip_files(src, dst, arcname=None)¶ Compresses a list of files into a given zip archive.
  - Parameters
    @src – iterable containing one or more file paths
    @dst – destination filename (with path if needed)
    @arcname – iterable of names to give the elements in the archive (must correspond to src)
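A behavior like the one documented for zip_files can be sketched with the standard-library zipfile module; this is an illustration of the src/dst/arcname contract above, not the package's own code.

```python
import os
import tempfile
import zipfile

def zip_files(src, dst, arcname=None):
    """Sketch: write each path in src into the archive dst,
    optionally renamed via the parallel arcname iterable."""
    names = arcname if arcname is not None else src
    with zipfile.ZipFile(dst, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, name in zip(src, names):
            zf.write(path, name)

# Usage: zip two temporary files under friendlier archive names.
tmpdir = tempfile.mkdtemp()
paths = []
for i in range(2):
    p = os.path.join(tmpdir, f"file{i}.txt")
    with open(p, "w") as f:
        f.write("data")
    paths.append(p)

dst = os.path.join(tmpdir, "out.zip")
zip_files(paths, dst, arcname=["a.txt", "b.txt"])
with zipfile.ZipFile(dst) as zf:
    print(zf.namelist())  # ['a.txt', 'b.txt']
```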
experiment_impact_tracker.get_region_metrics module¶
- experiment_impact_tracker.get_region_metrics.get_current_location()¶
- experiment_impact_tracker.get_region_metrics.get_current_region_info()¶
- experiment_impact_tracker.get_region_metrics.get_region_by_coords(coords)¶
- experiment_impact_tracker.get_region_metrics.get_sorted_region_infos()¶
- experiment_impact_tracker.get_region_metrics.get_zone_information_by_coords(coords)¶
- experiment_impact_tracker.get_region_metrics.get_zone_name_by_id(zone_id)¶
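The idea behind a coordinate-to-region lookup like get_region_by_coords can be sketched in pure Python. The real package resolves coordinates against shapely regional geometries (see load_regions_with_bounding_boxes in the constants module); here simple rectangular bounding boxes and hypothetical region data stand in for those.

```python
def get_region_by_coords(coords, regions):
    """Sketch of a point-in-region lookup. Each region is a
    (lat_min, lat_max, lon_min, lon_max) bounding box; the real
    package uses shapely geometries, and this data is hypothetical."""
    lat, lon = coords
    for name, (lat_min, lat_max, lon_min, lon_max) in regions.items():
        if lat_min <= lat <= lat_max and lon_min <= lon <= lon_max:
            return name
    return None

# Hypothetical bounding boxes for two grid regions:
regions = {
    "california": (32.5, 42.0, -124.5, -114.0),
    "france": (41.0, 51.5, -5.5, 9.8),
}

print(get_region_by_coords((37.77, -122.42), regions))  # california
print(get_region_by_coords((48.85, 2.35), regions))     # france
```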
experiment_impact_tracker.stats module¶
- experiment_impact_tracker.stats.get_average_treatment_effect(data1, data2)¶
- experiment_impact_tracker.stats.run_permutation_test(all_data, n1, n2)¶
- experiment_impact_tracker.stats.run_test(test_id, data1, data2, alpha=0.05)¶ Computes the statistical test test_id comparing data1 and data2 at significance level alpha.
  Taken from: https://arxiv.org/abs/1904.06979. Please cite that work if using this function.
  - Parameters
    test_id (str) – which test to use
    data1 (np.ndarray) – sample 1
    data2 (np.ndarray) – sample 2
    alpha (float) – significance level of the test
  - Returns
    (bool) if True, the null hypothesis is rejected
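For intuition, a generic two-sample permutation test on the difference of means can be sketched as follows, mirroring the run_permutation_test signature above (first n1 entries of all_data are sample 1, the next n2 are sample 2). This is an illustrative sketch, not the package's implementation, and it returns a p-value rather than a reject/accept decision.

```python
import random

def run_permutation_test(all_data, n1, n2, n_permutations=2000, seed=0):
    """Sketch: estimate how often a random relabeling of the pooled
    data produces a difference of means at least as extreme as the
    observed one. Small p-values suggest the samples differ."""
    rng = random.Random(seed)

    def mean(xs):
        return sum(xs) / len(xs)

    observed = abs(mean(all_data[:n1]) - mean(all_data[n1:n1 + n2]))
    data = list(all_data)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(data)
        stat = abs(mean(data[:n1]) - mean(data[n1:n1 + n2]))
        if stat >= observed:
            count += 1
    return count / n_permutations

# Clearly separated samples should give a small p-value:
sample1 = [1.0, 1.1, 0.9, 1.2, 1.0]
sample2 = [5.0, 5.1, 4.9, 5.2, 5.0]
p = run_permutation_test(sample1 + sample2, 5, 5)
print(p < 0.05)  # True
```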
experiment_impact_tracker.utils module¶
- experiment_impact_tracker.utils.gather_additional_info(info, logdir)¶
- experiment_impact_tracker.utils.get_flop_count_tensorflow(graph=None, session=None)¶
- experiment_impact_tracker.utils.get_timestamp(*args, **kwargs)¶
- experiment_impact_tracker.utils.launch_power_monitorprocessify_func(q, *args, **kwargs)¶
- experiment_impact_tracker.utils.processify(func)¶ Decorator to run a function as a separate process. Every argument and the return value must be picklable. The created process is joined, so the code does not run in parallel.
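A decorator with the behavior just described can be sketched with multiprocessing: run the function in a child process, pass the (picklable) return value back through a queue, and join before returning. This is a simplified sketch assuming a fork-based start method (the Linux default); the package's version also handles exception propagation, which is omitted here.

```python
import multiprocessing

def processify(func):
    """Sketch: run func in a separate process and return its result.
    The process is joined, so the call blocks (no parallelism)."""
    def wrapper(*args, **kwargs):
        q = multiprocessing.Queue()

        def target(q, *args, **kwargs):
            q.put(func(*args, **kwargs))

        p = multiprocessing.Process(target=target, args=(q,) + args, kwargs=kwargs)
        p.start()
        result = q.get()  # read before join to avoid a full-pipe deadlock
        p.join()
        return result
    return wrapper

@processify
def square(x):
    return x * x

if __name__ == "__main__":
    print(square(6))  # 36
```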