# libsmctrl: Quick & Easy Hardware Compute Partitioning on NVIDIA GPUs

This library was developed as part of the following paper:

_J. Bakita and J. H. Anderson, "Hardware Compute Partitioning on NVIDIA GPUs", Proceedings of the 29th IEEE Real-Time and Embedded Technology and Applications Symposium, pp. 54-66, May 2023._

Please cite this paper in any work which leverages our library. Here's the BibTeX entry:
```
@inproceedings{bakita2023hardware,
  title={Hardware Compute Partitioning on {NVIDIA} {GPUs}},
  author={Bakita, Joshua and Anderson, James H},
  booktitle={Proceedings of the 29th IEEE Real-Time and Embedded Technology and Applications Symposium},
  year={2023},
  month={May},
  pages={54--66},
  _series={RTAS}
}
```

Please see [the paper](https://www.cs.unc.edu/~jbakita/rtas23.pdf) and `libsmctrl.h` for details and examples of how to use this library.
We strongly encourage consulting those resources first; the comments below serve merely as an addendum.

## Run-time Dependencies
`libcuda.so`, which is automatically installed by the NVIDIA GPU driver.

## Building
To build, ensure that you have `gcc` installed and access to the CUDA SDK including `nvcc`. Then run:
```
make libsmctrl.a
```

If you see errors about CUDA headers or libraries not being found, your CUDA installation may be in a non-standard location.
Correct this by explicitly specifying the location of the CUDA install to `make`, e.g.:
```
make CUDA=/playpen/jbakita/CUDA/cuda-archive/cuda-10.2/ libsmctrl.a
```

For binary backwards-compatibility with old versions of the NVIDIA GPU driver, we recommend building with an old version of the CUDA SDK.
For example, building against CUDA 10.2 yields a binary compatible with any NVIDIA GPU driver newer than 440.36 (Nov 2019), while building against CUDA 8.0 yields a binary compatible with any NVIDIA GPU driver newer than 375.26 (Dec 2016).

Older versions of `nvcc` may require you to use an older version of `g++`.
This can be explicitly specified via the `CXX` variable, e.g.:
```
make CUDA=/playpen/jbakita/CUDA/cuda-archive/cuda-8.0/ CXX=g++-5 libsmctrl.a
```

`libsmctrl` supports being built as a shared library.
This will require you to distribute `libsmctrl.so` with your compiled program.
If you do not know what a shared library is, or why you would need to specify the path to `libsmctrl.so` in `LD_LIBRARY_PATH`, do not do this.
To build as a shared library, replace `libsmctrl.a` with `libsmctrl.so` in the above commands.

## Linking in Your Application
If you have cloned and built `libsmctrl` in the folder `/playpen/libsmctrl` (replace this with the location you use):

1. Add `-I/playpen/libsmctrl` to your compiler command (this allows `#include <libsmctrl.h>` in your C/C++ files).
2. Add `-lsmctrl` to your linker command (this allows the linker to resolve the `libsmctrl` functions you use to the implementations in `libsmctrl.a` or `libsmctrl.so`).
3. Add `-L/playpen/libsmctrl` to your linker command (this allows the linker to find `libsmctrl.a` or `libsmctrl.so`).
4. (If not already included) add `-lcuda` to your linker command (this links against the CUDA driver library).

Note that if you have compiled both `libsmctrl.a` (the static library) and `libsmctrl.so` (the shared library), most toolchains will link against the shared library by default.
To statically link against `libsmctrl.a`, delete `libsmctrl.so`.

For example, if you have a CUDA program written in `benchmark.cu` and have built `libsmctrl`, you can compile and link against `libsmctrl` via the following command:
```
nvcc benchmark.cu -o benchmark -I/playpen/libsmctrl -lsmctrl -lcuda -L/playpen/libsmctrl
```
The resultant `benchmark` binary should be portable to any system with an equivalent or newer version of the NVIDIA GPU driver installed.
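
For reference, below is a minimal sketch of what such a `benchmark.cu` might contain.
It is illustrative only: the call shown (`libsmctrl_set_global_mask()`, taking a 64-bit mask in which a set bit disables the corresponding TPC) reflects our reading of `libsmctrl.h`, so consult the header for the authoritative interface.
```
// benchmark.cu -- illustrative sketch only; consult libsmctrl.h for the
// authoritative interface. Assumes libsmctrl_set_global_mask(uint64_t),
// where a set bit disables the corresponding TPC.
#include <cstdio>
#include <cstdint>
#include <libsmctrl.h>

// Each block records the ID of the SM it ran on.
__global__ void where_am_i(uint32_t *sm_ids) {
  if (threadIdx.x == 0) {
    uint32_t smid;
    asm("mov.u32 %0, %%smid;" : "=r"(smid));
    sm_ids[blockIdx.x] = smid;
  }
}

int main() {
  // Disable every TPC except TPC 0 for all work launched by this process.
  libsmctrl_set_global_mask(~0x1ull);

  uint32_t *sm_ids;
  cudaMallocManaged(&sm_ids, 16 * sizeof(uint32_t));
  where_am_i<<<16, 32>>>(sm_ids);
  cudaDeviceSynchronize();

  // With the mask above, every block should report an SM on TPC 0.
  for (int i = 0; i < 16; i++)
    printf("block %d ran on SM %u\n", i, sm_ids[i]);
  cudaFree(sm_ids);
  return 0;
}
```
Compile it with the `nvcc` command above.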

## Run Tests
To test partitioning:
```
make tests
./libsmctrl_test_global_mask
./libsmctrl_test_stream_mask
./libsmctrl_test_next_mask
```

To test that high-granularity masks override low-granularity ones:
```
make tests
./libsmctrl_test_stream_mask_override
./libsmctrl_test_next_mask_override
```

And if `nvdebug` has been installed:
```
make tests
./libsmctrl_test_gpc_info
```

## Supported GPUs

#### Known Working

- NVIDIA GPUs from compute capability 3.5 through 8.9, including embedded "Jetson" GPUs
- CUDA 6.5 through 12.6
- `x86_64` and Jetson `aarch64` platforms

#### Known Issues

- `global_mask` and `next_mask` cannot disable TPCs with IDs above 128
    - Only relevant on GPUs with over 128 TPCs, such as the RTX 6000 Ada
- Untested on non-Jetson `aarch64` platforms
- Untested on CUDA 11.8, 12.0, and 12.1 on Jetson `aarch64`
- Mask bit indexes do not directly correspond to software-visible TPC/SM IDs in V4 TMD/QMDs (Hopper+; compute capability 9.0). The bit indexes instead appear to correspond to on-chip units, including disabled ones, i.e. the set of TPCs before SM-ID remapping and floorsweeping.

## Important Limitations

1. Only supports partitioning _within_ a single GPU context.
   At time of writing, sharing a GPU context across multiple CPU address spaces ranges from challenging to impossible.
   The implication is that your applications must first be combined into a single CPU process (see the sketch after this list).
2. No aspect of this system prevents implicit synchronization on the GPU.
   See prior work, particularly that of Amert et al. (perhaps the CUPiD^RT paper), for ways to avoid this.
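
To make limitation 1 concrete, the sketch below shows the intended usage pattern: two workloads combined into a single process (and thus a single CUDA context), each confined to its own TPC partition via a per-stream mask.
As above, this is only a sketch: `libsmctrl_set_stream_mask()` taking a stream pointer and a 64-bit disable mask reflects our reading of `libsmctrl.h`, and the 4-TPC/4-TPC split is a placeholder that should be sized for your GPU's actual TPC count.
```
// Illustrative sketch only: two workloads sharing one process and context,
// partitioned onto disjoint TPC sets with per-stream masks.
// Assumes libsmctrl_set_stream_mask(void *stream, uint64_t mask),
// where a set bit disables the corresponding TPC for that stream.
#include <cstdint>
#include <libsmctrl.h>

__global__ void workload_a() { /* ...first application's kernels... */ }
__global__ void workload_b() { /* ...second application's kernels... */ }

int main() {
  cudaStream_t stream_a, stream_b;
  cudaStreamCreate(&stream_a);
  cudaStreamCreate(&stream_b);

  // Placeholder split: stream A gets TPCs 0-3, stream B gets TPCs 4-7.
  libsmctrl_set_stream_mask(stream_a, ~(uint64_t)0xf);        // only TPCs 0-3 enabled
  libsmctrl_set_stream_mask(stream_b, ~((uint64_t)0xf << 4)); // only TPCs 4-7 enabled

  workload_a<<<8, 128, 0, stream_a>>>();
  workload_b<<<8, 128, 0, stream_b>>>();
  cudaDeviceSynchronize();

  cudaStreamDestroy(stream_a);
  cudaStreamDestroy(stream_b);
  return 0;
}
```
Kernels launched in `stream_a` should then only ever occupy TPCs 0-3, and kernels launched in `stream_b` only TPCs 4-7, regardless of launch order.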

## Porting Stream Masking to Newer CUDA Versions

Build the tests with `make tests`, then run the following:
```
for (( i=0; ; i+=8 )); do MASK_OFF=$i ./libsmctrl_test_stream_mask && break; done
echo "Found working offset: $i"
```

How this works:

1. If `MASK_OFF` is set, `libsmctrl` applies it as a byte offset to a base address for the location
   of the SM mask fields in CUDA's stream data structure.
   - That base address is the one for CUDA 12.2 at time of writing.
2. The stream masking test is run.
3. If the test succeeded (returned zero), the loop aborts; otherwise, the offset to attempt is incremented and the test repeats.

Once this loop aborts, take the found offset and add it into the switch statement for the appropriate CUDA version and CPU architecture.

If the loop hangs (e.g. at offset 40), terminate and restart the loop with `i` initialized past the offset that hung (e.g. at offset 48).
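
To illustrate the final step, the fragment below shows the general shape of such a change.
It is purely illustrative: the function and variable names, the version encoding, and the offset value are hypothetical placeholders, not the actual `libsmctrl.c` code.
```
/* Purely illustrative sketch -- hypothetical names and placeholder values,
 * not the actual libsmctrl.c code. The idea: map the detected CUDA version
 * (and CPU architecture) to the byte offset found by the search loop. */
#include <stddef.h>

size_t sm_mask_offset_for(int cuda_version /* e.g. 12060 for CUDA 12.6 */) {
  switch (cuda_version) {
  /* ...existing cases for already-supported CUDA versions... */
#if defined(__x86_64__)
  case 12060:    /* hypothetical: a newly ported CUDA 12.6 on x86_64 */
    return 0x48; /* placeholder: substitute the offset the loop found */
#endif
  default:
    return 0;    /* unknown version: no extra offset beyond the base */
  }
}
```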