		function tracer guts
		====================
		By Mike Frysinger

Introduction
------------

Here we will cover the architecture pieces that the common function tracing
code relies on for proper functioning. Things are broken down in order of
increasing complexity so that you can start simple and at least get basic
functionality.

Note that this focuses on architecture implementation details only. If you
want more explanation of a feature in terms of common code, review the common
ftrace.txt file.

Ideally, everyone who wishes to retain performance while supporting tracing in
their kernel should make it all the way to dynamic ftrace support.


Prerequisites
-------------

Ftrace relies on these features being implemented:
  STACKTRACE_SUPPORT - implement save_stack_trace()
  TRACE_IRQFLAGS_SUPPORT - implement include/asm/irqflags.h


HAVE_FUNCTION_TRACER
--------------------

You will need to implement the mcount and ftrace_stub functions.

The exact mcount symbol name will depend on your toolchain. Some call it
"mcount", "_mcount", or even "__mcount". You can probably figure it out by
running something like:
	$ echo 'main(){}' | gcc -x c -S -o - - -pg | grep mcount
	        call    mcount
We'll assume below that the symbol is "mcount" just to keep the examples nice
and simple.

Keep in mind that the ABI that is in effect inside of the mcount function is
*highly* architecture/toolchain specific. We cannot help you in this regard,
sorry. Dig up some old documentation and/or find someone more familiar than
you to bang ideas off of. Typically, register usage (argument/scratch/etc...)
is a major issue at this point, especially in relation to the location of the
mcount call (before/after function prologue). You might also want to look at
how glibc has implemented the mcount function for your architecture. It might
be (semi-)relevant.

The mcount function should check the function pointer ftrace_trace_function
to see if it is set to ftrace_stub. If it is, there is nothing for you to do,
so return immediately. If it isn't, then call that function in the same way
the mcount function normally calls __mcount_internal -- the first argument is
the "frompc" while the second argument is the "selfpc" (adjusted to remove the
size of the mcount call that is embedded in the function).

For example, if the function foo() calls bar(), when the bar() function calls
mcount(), the arguments mcount() will pass to the tracer are:
	"frompc" - the address bar() will use to return to foo()
	"selfpc" - the address of bar() itself, adjusted for the size of the
		   embedded mcount call

Also keep in mind that this mcount function will be called *a lot*, so
optimizing for the default case of no tracer will help the smooth running of
your system when tracing is disabled. So the start of the mcount function
typically does the bare minimum of checking before returning. That also means
the code flow should usually be kept linear (i.e. no branching in the nop
case). This is of course an optimization and not a hard requirement.

Here is some pseudo code that should help (these functions should actually be
implemented in assembly):

void ftrace_stub(void)
{
	return;
}

void mcount(void)
{
	/* save any bare state needed in order to do initial checking */

	extern void (*ftrace_trace_function)(unsigned long, unsigned long);
	if (ftrace_trace_function != ftrace_stub)
		goto do_trace;

	/* restore any bare state */

	return;

do_trace:

	/* save all state needed by the ABI (see paragraph above) */

	unsigned long frompc = ...;
	unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;
	ftrace_trace_function(frompc, selfpc);

	/* restore all state needed by the ABI */
}

Don't forget to export mcount for modules!
extern void mcount(void);
EXPORT_SYMBOL(mcount);
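
To make the calling convention concrete, here is a minimal sketch of the kind
of function that common code will point ftrace_trace_function at. You do not
need to write one to port ftrace -- this is purely illustrative, the name
my_trace_function is hypothetical, and the argument order simply follows the
pseudo code above:

static void my_trace_function(unsigned long frompc, unsigned long selfpc)
{
	/*
	 * selfpc identifies the traced function; frompc is its call site.
	 * Record them somewhere (e.g. a ring buffer). Be careful what you
	 * call from here: anything compiled with -pg will recurse into
	 * mcount.
	 */
}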


HAVE_FUNCTION_TRACE_MCOUNT_TEST
-------------------------------

This is an optional optimization for the normal case when tracing is turned off
in the system. If you do not enable this Kconfig option, the common ftrace
code will take care of doing the checking for you.

To support this feature, you only need to check the function_trace_stop
variable in the mcount function. If it is non-zero, there is no tracing to be
done at all, so you can return.

This additional pseudo code would simply be:
void mcount(void)
{
	/* save any bare state needed in order to do initial checking */

+	if (function_trace_stop)
+		return;

	extern void (*ftrace_trace_function)(unsigned long, unsigned long);
	if (ftrace_trace_function != ftrace_stub)
...


HAVE_FUNCTION_GRAPH_TRACER
--------------------------

Deep breath ... time to do some real work. Here you will need to update the
mcount function to check ftrace graph function pointers, as well as implement
some functions to save (hijack) and restore the return address.

The mcount function should check the function pointers ftrace_graph_return
(compare to ftrace_stub) and ftrace_graph_entry (compare to
ftrace_graph_entry_stub). If either of those is not set to the relevant stub
function, call the arch-specific function ftrace_graph_caller which in turn
calls the arch-specific function prepare_ftrace_return. Neither of these
function names is strictly required, but you should use them anyway to stay
consistent across the architecture ports -- easier to compare & contrast
things.

The arguments to prepare_ftrace_return are slightly different from those
passed to ftrace_trace_function. The second argument "selfpc" is the same,
but the first argument should be a pointer to the "frompc". Typically this is
located on the stack. This allows the function to temporarily hijack the
return address, pointing it at the arch-specific return_to_handler function.
That function will simply call the common ftrace_return_to_handler function,
which returns the original return address so you can get back to the original
call site.

Here is the updated mcount pseudo code:
void mcount(void)
{
...
	if (ftrace_trace_function != ftrace_stub)
		goto do_trace;

+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+	extern void (*ftrace_graph_return)(...);
+	extern void (*ftrace_graph_entry)(...);
+	if (ftrace_graph_return != ftrace_stub ||
+	    ftrace_graph_entry != ftrace_graph_entry_stub)
+		ftrace_graph_caller();
+#endif

	/* restore any bare state */
...

Here is the pseudo code for the new ftrace_graph_caller assembly function:
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
void ftrace_graph_caller(void)
{
	/* save all state needed by the ABI */

	unsigned long *frompc = &...;
	unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;
	/* passing frame pointer up is optional -- see below */
	prepare_ftrace_return(frompc, selfpc, frame_pointer);

	/* restore all state needed by the ABI */
}
#endif

For information on how to implement prepare_ftrace_return(), simply look at the
x86 version (the frame pointer passing is optional; see the next section for
more information). The only architecture-specific piece in it is the setup of
the fault recovery table (the asm(...) code). The rest should be the same
across architectures.
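
As a rough C sketch, modeled on the x86 implementation (the plain loads and
stores on parent below stand in for x86's fault-protected asm, and the
common-code signatures -- ftrace_push_return_trace() in particular -- may
differ between kernel versions):

extern void return_to_handler(void);

void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr,
			   unsigned long frame_pointer)
{
	unsigned long old;
	struct ftrace_graph_ent trace;
	unsigned long return_hooker = (unsigned long)&return_to_handler;

	if (unlikely(atomic_read(&current->tracing_graph_pause)))
		return;

	/* hijack the return address so the traced function "returns" into
	 * return_to_handler instead of its real caller */
	old = *parent;
	*parent = return_hooker;

	/* stash the real return address for return_to_handler to find */
	if (ftrace_push_return_trace(old, self_addr, &trace.depth,
				     frame_pointer) == -EBUSY) {
		*parent = old;
		return;
	}

	trace.func = self_addr;
	if (!ftrace_graph_entry(&trace)) {
		/* the tracer declined this function; undo the hijack */
		current->curr_ret_stack--;
		*parent = old;
	}
}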

Here is the pseudo code for the new return_to_handler assembly function. Note
that the ABI that applies here is different from what applies to the mcount
code. Since you are returning from a function (after the epilogue), you might
be able to skimp on things saved/restored (usually just registers used to pass
return values).

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
void return_to_handler(void)
{
	/* save all state needed by the ABI (see paragraph above) */

	void (*original_return_point)(void) = ftrace_return_to_handler();

	/* restore all state needed by the ABI */

	/* this is usually either a return or a jump */
	original_return_point();
}
#endif


HAVE_FUNCTION_GRAPH_FP_TEST
---------------------------

An arch may pass in a unique value (frame pointer) to both the entering and
exiting of a function. On exit, the value is compared and if it does not
match, then it will panic the kernel. This is largely a sanity check for bad
code generation with gcc. If gcc for your port sanely updates the frame
pointer under different optimization levels, then ignore this option.

However, adding support for it isn't terribly difficult. In your assembly code
that calls prepare_ftrace_return(), pass the frame pointer as the 3rd argument.
Then in the C version of that function, do what the x86 port does and pass it
along to ftrace_push_return_trace() instead of a stub value of 0.

Similarly, when you call ftrace_return_to_handler(), pass it the frame pointer.


HAVE_FTRACE_NMI_ENTER
---------------------

If you can't trace NMI functions, then skip this option.

<details to be filled>


HAVE_SYSCALL_TRACEPOINTS
------------------------

You need very few things to get syscall tracing working in an arch.

- Support HAVE_ARCH_TRACEHOOK (see arch/Kconfig).
- Have a NR_syscalls variable in <asm/unistd.h> that provides the number
  of syscalls supported by the arch.
- Support the TIF_SYSCALL_TRACEPOINT thread flag.
- Call the trace_sys_enter() and trace_sys_exit() tracepoints from the
  arch's ptrace syscall-tracing path, as sketched below.
- Tag this arch as HAVE_SYSCALL_TRACEPOINTS.
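
Here is a rough sketch of what the tracepoint half of that usually looks like.
The function names and the pt_regs fields (syscall_nr, retval) are
hypothetical -- use your arch's actual syscall entry/exit tracing hooks and
register layout:

#include <trace/events/syscalls.h>

asmlinkage void do_syscall_trace_enter(struct pt_regs *regs)
{
	/* ... existing ptrace/tracehook work ... */

	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
		trace_sys_enter(regs, regs->syscall_nr);
}

asmlinkage void do_syscall_trace_leave(struct pt_regs *regs)
{
	if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
		trace_sys_exit(regs, regs->retval);

	/* ... existing ptrace/tracehook work ... */
}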


HAVE_FTRACE_MCOUNT_RECORD
-------------------------

See scripts/recordmcount.pl for more info. Just fill in the arch-specific
details for how to locate the addresses of mcount call sites via objdump.
This option doesn't make much sense without also implementing dynamic ftrace.


HAVE_DYNAMIC_FTRACE
-------------------

You will first need HAVE_FTRACE_MCOUNT_RECORD and HAVE_FUNCTION_TRACER, so
scroll your reader back up if you got overeager.

Once those are out of the way, you will need to implement:
	- asm/ftrace.h:
		- MCOUNT_ADDR
		- ftrace_call_adjust()
		- struct dyn_arch_ftrace{}
	- asm code:
		- mcount() (new stub)
		- ftrace_caller()
		- ftrace_call()
		- ftrace_stub()
	- C code:
		- ftrace_dyn_arch_init()
		- ftrace_make_nop()
		- ftrace_make_call()
		- ftrace_update_ftrace_func()

First you will need to fill out some arch details in your asm/ftrace.h.

Define MCOUNT_ADDR as the address of your mcount symbol similar to:
	#define MCOUNT_ADDR ((unsigned long)mcount)
Since no one else will have a decl for that function, you will need to:
	extern void mcount(void);

You will also need the helper function ftrace_call_adjust(). Most people
will be able to stub it out like so:
	static inline unsigned long ftrace_call_adjust(unsigned long addr)
	{
		return addr;
	}
<details to be filled>

Lastly you will need the custom dyn_arch_ftrace structure. If you need
some extra state when runtime patching arbitrary call sites, this is the
place. For now though, create an empty struct:
	struct dyn_arch_ftrace {
		/* No extra data needed */
	};

With the header out of the way, we can fill out the assembly code. While we
did already create an mcount() function earlier, dynamic ftrace only wants a
stub function. This is because mcount() will only be used during boot and
then all references to it will be patched out, never to return. Instead, the
guts of the old mcount() will be used to create a new ftrace_caller()
function. Because the two are hard to merge, it will most likely be a lot
easier to have two separate definitions split up by #ifdefs. Same goes for
ftrace_stub(), as that will now be inlined in ftrace_caller().

Before we get any more confused, let's check out some pseudo code so you can
implement your own stuff in assembly:

void mcount(void)
{
	return;
}

void ftrace_caller(void)
{
	/* implement HAVE_FUNCTION_TRACE_MCOUNT_TEST if you desire */

	/* save all state needed by the ABI (see paragraph above) */

	unsigned long frompc = ...;
	unsigned long selfpc = <return address> - MCOUNT_INSN_SIZE;

ftrace_call:
	ftrace_stub(frompc, selfpc);

	/* restore all state needed by the ABI */

ftrace_stub:
	return;
}

This might look a little odd at first, but keep in mind that we will be runtime
patching multiple things. First, only functions that we actually want to trace
will be patched to call ftrace_caller(). Second, since we only have one tracer
active at a time, we will patch the ftrace_caller() function itself to call the
specific tracer in question. That is the point of the ftrace_call label.

With that in mind, let's move on to the C code that will actually be doing the
runtime patching. You'll need a little knowledge of your arch's opcodes in
order to make it through the next section.

Every arch has an init callback function. If you need to do something early on
to initialize some state, this is the time to do that. Otherwise, this simple
function below should be sufficient for most people:

int __init ftrace_dyn_arch_init(void *data)
{
	/* return value is done indirectly via data */
	*(unsigned long *)data = 0;

	return 0;
}

There are two functions used to do runtime patching of arbitrary functions.
The first is used to turn the mcount call site into a nop (which is what helps
us retain runtime performance when not tracing). The second is used to turn
the mcount call site into a call to an arbitrary location (but typically that
is ftrace_caller()). See the general function definitions in linux/ftrace.h
for the functions:
	ftrace_make_nop()
	ftrace_make_call()
The rec->ip value is the address of the mcount call site that was collected
by scripts/recordmcount.pl during build time.
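
Here is a rough sketch of what those two functions tend to look like. The
helpers ftrace_gen_call_insn(), ftrace_gen_nop_insn(), and
ftrace_modify_code() are hypothetical stand-ins for your arch's opcode
generation and safe text-patching routines:

int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec,
		    unsigned long addr)
{
	unsigned char old[MCOUNT_INSN_SIZE], new[MCOUNT_INSN_SIZE];

	/* what the call site should currently hold (a call to addr, i.e.
	 * mcount) and what it will become (your arch's nop sequence) */
	ftrace_gen_call_insn(old, rec->ip, addr);
	ftrace_gen_nop_insn(new);

	/* verify the expected opcode is in place, then patch in the new one */
	return ftrace_modify_code(rec->ip, old, new);
}

int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
{
	unsigned char old[MCOUNT_INSN_SIZE], new[MCOUNT_INSN_SIZE];

	/* the reverse: replace the nop with a call to addr (usually
	 * ftrace_caller) */
	ftrace_gen_nop_insn(old);
	ftrace_gen_call_insn(new, rec->ip, addr);

	return ftrace_modify_code(rec->ip, old, new);
}

The verify step is important: if the bytes at rec->ip are not what you expect,
return an error rather than patching, and the common code will report it.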

The last function is used to do runtime patching of the active tracer. This
will be modifying the assembly code at the location of the ftrace_call symbol
inside of the ftrace_caller() function. So you should have sufficient padding
at that location to support the new function calls you'll be inserting. Some
people will be using a "call" type instruction while others will be using a
"branch" type instruction. Specifically, the function is:
	ftrace_update_ftrace_func()


HAVE_DYNAMIC_FTRACE + HAVE_FUNCTION_GRAPH_TRACER
------------------------------------------------

The function grapher needs a few tweaks in order to work with dynamic ftrace.
Basically, you will need to:
	- update:
		- ftrace_caller()
		- ftrace_graph_call()
		- ftrace_graph_caller()
	- implement:
		- ftrace_enable_ftrace_graph_caller()
		- ftrace_disable_ftrace_graph_caller()

<details to be filled>
Quick notes:
	- add a nop stub after the ftrace_call location named ftrace_graph_call;
	  stub needs to be large enough to support a call to ftrace_graph_caller()
	- update ftrace_graph_caller() to work with being called by the new
	  ftrace_caller() since some semantics may have changed
	- ftrace_enable_ftrace_graph_caller() will runtime patch the
	  ftrace_graph_call location with a call to ftrace_graph_caller()
	- ftrace_disable_ftrace_graph_caller() will runtime patch the
	  ftrace_graph_call location with nops (see the sketch below)
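
Reusing the hypothetical ftrace_gen_*_insn()/ftrace_modify_code() helpers from
the earlier sketch, those last two functions usually boil down to patching
that one well-known location:

extern void ftrace_graph_call(void);
extern void ftrace_graph_caller(void);

int ftrace_enable_ftrace_graph_caller(void)
{
	unsigned long ip = (unsigned long)&ftrace_graph_call;
	unsigned char old[MCOUNT_INSN_SIZE], new[MCOUNT_INSN_SIZE];

	/* replace the nop stub with a call to ftrace_graph_caller() */
	ftrace_gen_nop_insn(old);
	ftrace_gen_call_insn(new, ip, (unsigned long)ftrace_graph_caller);

	return ftrace_modify_code(ip, old, new);
}

int ftrace_disable_ftrace_graph_caller(void)
{
	unsigned long ip = (unsigned long)&ftrace_graph_call;
	unsigned char old[MCOUNT_INSN_SIZE], new[MCOUNT_INSN_SIZE];

	/* and back again: turn the call into nops */
	ftrace_gen_call_insn(old, ip, (unsigned long)ftrace_graph_caller);
	ftrace_gen_nop_insn(new);

	return ftrace_modify_code(ip, old, new);
}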