Open conduit - call each component and see if they can provide a conduit that can satisfy all these attributes - return the conduit id (a negative value indicates error)
1. `orte_rml_API_open_conduit` (`rml_base_stubs.c`)
Iterate over the active RML modules and call each module's `open_conduit` (e.g. the oob module in `rml_oob_component.c`). Once a module returns a conduit, store it in the conduit array and return its array index.
```c
param = NULL;
if (ORTE_SUCCESS != (rc = orte_regx.nidmap_create(orte_node_pool, &param))) {
    ORTE_ERROR_LOG(rc);
    return rc;
}
if (NULL != orte_node_regex) {
    free(orte_node_regex);
}
orte_node_regex = param;
/* if this is too long, then we'll have to do it with
 * a phone home operation instead */
if (strlen(param) < orte_plm_globals.node_regex_threshold) {
    opal_argv_append(argc, argv, "-"OPAL_MCA_CMD_LINE_ID);
    opal_argv_append(argc, argv, "orte_node_regex");
    opal_argv_append(argc, argv, orte_node_regex);
    /* mark that the nidmap has been communicated */
    orte_nidmap_communicated = true;
}
```
```shell
# download openmpi-v4.0.0.tar.gz from the official website,
# untar, and run configure
./configure --prefix=/usr/local/openmpi --enable-orterun-prefix-by-default
# make and install
make -j $(nproc) all
make install
```
An easy example framework to discuss is the MPI framework named “btl”, or the Byte Transfer Layer. It is used to send and receive data on different kinds of networks. Hence, Open MPI has btl components for shared memory, TCP, Infiniband, Myrinet, etc.
-np: Run this many copies of the program on the given nodes. This option indicates that the specified file is an executable program and not an application context. If no value is provided for the number of copies to execute (i.e., neither the “-np” nor its synonyms are provided on the command line), Open MPI will automatically execute a copy of the program on each process slot (see below for description of a “process slot”). This feature, however, can only be used in the SPMD model and will return an error (without beginning execution of the application) otherwise.
--bind-to: Bind processes to the specified object, defaults to core. Supported options include slot, hwthread, core, l1cache, l2cache, l3cache, socket, numa, board, and none.
-x: Export the specified environment variables to the remote nodes before executing the program. Only one environment variable can be specified per -x option. Existing environment variables can be specified or new variable names specified with corresponding values. For example: % mpirun -x DISPLAY -x OFILE=/tmp/out … The parser for the -x option is not very sophisticated; it does not even understand quoted values. Users are advised to set variables in the environment, and then use -x to export (not define) them.
-mca: Send arguments to various MCA modules. See the “MCA” section, below.
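Putting the options above together, a typical invocation might look like the following. This is only an illustrative sketch: `./my_mpi_app` is a placeholder program name, and the BTL component list is just one plausible choice.

```shell
# 4 copies, one core per process, DISPLAY exported to the remote nodes,
# and only the self/vader/tcp BTL components enabled
mpirun -np 4 --bind-to core -x DISPLAY \
       --mca btl self,vader,tcp ./my_mpi_app
```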
Note that Open MPI will use every network it can see. If you do not want it to use an IP network, you can explicitly disable it. However:
Note that Open MPI will still use TCP for control messages, such as data between mpirun and the MPI processes, rendezvous information during MPI_INIT, etc. To disable TCP altogether, you also need to disable the tcp component from the OOB framework.
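Concretely, that advice translates to something like the sketch below, using the MCA `^` (exclude) syntax; `./my_mpi_app` is a placeholder. Note that excluding the oob tcp component usually requires some other OOB transport to be available, or the job will fail to start.

```shell
# exclude the tcp component of the btl framework (MPI point-to-point traffic)
mpirun --mca btl ^tcp -np 4 ./my_mpi_app

# to disable TCP altogether, also exclude the tcp component of the oob
# framework (control messages between mpirun and the MPI processes)
mpirun --mca btl ^tcp --mca oob ^tcp -np 4 ./my_mpi_app
```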
So, back to the question from the previous post: when scanning an image, why not union the layers first and then scan the result? Having read this far, you may already see that this is not easy to implement.
Can it be done? Of course it can!
As described in loading-an-image-filesystem-changeset:

1. Untar the root layer into a directory (it acts as the container root fs).
2. Untar the next layer into the same directory, then walk it once and remove any file with the `.wh.` prefix along with its corresponding shadowed file.
3. Repeat this process for each remaining layer.
4. …

Note: some readers may suspect that these details vary by storage driver. They do not: the image tar archive format is independent of the storage driver.
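The steps above can be sketched as a small shell script. Everything here is a self-contained demo: the two layers and their file names are fabricated for illustration, and opaque whiteouts (`.wh..wh..opq`) are not handled.

```shell
set -e
work=$(mktemp -d)
rootfs="$work/rootfs"; mkdir -p "$rootfs"

# layer 0: base layer with two files
mkdir -p "$work/l0/etc"
echo "v1"   > "$work/l0/etc/app.conf"
echo "keep" > "$work/l0/etc/keep.conf"
tar -cf "$work/layer0.tar" -C "$work/l0" .

# layer 1: overwrites app.conf and deletes keep.conf via a whiteout marker
mkdir -p "$work/l1/etc"
echo "v2" > "$work/l1/etc/app.conf"
touch "$work/l1/etc/.wh.keep.conf"
tar -cf "$work/layer1.tar" -C "$work/l1" .

# union: untar each layer in order, then apply whiteouts:
# remove the shadowed file and the .wh. marker itself
for layer in "$work/layer0.tar" "$work/layer1.tar"; do
    tar -xf "$layer" -C "$rootfs"
    find "$rootfs" -name '.wh.*' | while read -r wh; do
        rm -rf "$(dirname "$wh")/$(basename "$wh" | sed 's/^\.wh\.//')"
        rm -f "$wh"
    done
done

cat "$rootfs/etc/app.conf"   # prints "v2"
```

A real implementation would also handle opaque whiteouts and hard links, but the untar-then-apply-whiteouts loop is the core of the changeset format.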
https://docs.docker.com/storage/storagedriver/ Storage drivers allow you to create data in the writable layer of your container. The files won’t be persisted after the container is deleted, and both read and write speeds are low.
A remarkably incisive summary.
Still, one question remains: different storage drivers implement layered images differently, so when `docker save` runs, how does it unify layers from different storage drivers into a single image tar file archive?