May 15 00:56:59.831086 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed May 14 23:14:51 -00 2025 May 15 00:56:59.831108 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 00:56:59.831118 kernel: BIOS-provided physical RAM map: May 15 00:56:59.831124 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 00:56:59.831129 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 00:56:59.831135 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 00:56:59.831142 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable May 15 00:56:59.831147 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 00:56:59.831153 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable May 15 00:56:59.831159 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 15 00:56:59.831165 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable May 15 00:56:59.831170 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 15 00:56:59.831176 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data May 15 00:56:59.831182 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 00:56:59.831189 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 15 00:56:59.831196 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 15 00:56:59.831202 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 
00:56:59.831207 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 15 00:56:59.831213 kernel: NX (Execute Disable) protection: active May 15 00:56:59.831219 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 15 00:56:59.831225 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable May 15 00:56:59.831240 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 15 00:56:59.831246 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable May 15 00:56:59.831251 kernel: extended physical RAM map: May 15 00:56:59.831257 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable May 15 00:56:59.831264 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable May 15 00:56:59.831271 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS May 15 00:56:59.831277 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable May 15 00:56:59.831282 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS May 15 00:56:59.831288 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable May 15 00:56:59.831294 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS May 15 00:56:59.831300 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable May 15 00:56:59.831306 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable May 15 00:56:59.831312 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable May 15 00:56:59.831317 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable May 15 00:56:59.831323 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable May 15 00:56:59.831330 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved May 15 00:56:59.831336 kernel: reserve setup_data: [mem 
0x000000009cb6f000-0x000000009cb7efff] ACPI data May 15 00:56:59.831342 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS May 15 00:56:59.831348 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable May 15 00:56:59.831356 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved May 15 00:56:59.831363 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS May 15 00:56:59.831369 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved May 15 00:56:59.831378 kernel: efi: EFI v2.70 by EDK II May 15 00:56:59.831385 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 May 15 00:56:59.831392 kernel: random: crng init done May 15 00:56:59.831400 kernel: SMBIOS 2.8 present. May 15 00:56:59.831407 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 May 15 00:56:59.831413 kernel: Hypervisor detected: KVM May 15 00:56:59.831420 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 May 15 00:56:59.831426 kernel: kvm-clock: cpu 0, msr d196001, primary cpu clock May 15 00:56:59.831432 kernel: kvm-clock: using sched offset of 4746906071 cycles May 15 00:56:59.831441 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns May 15 00:56:59.831447 kernel: tsc: Detected 2794.746 MHz processor May 15 00:56:59.831454 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 15 00:56:59.831461 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 15 00:56:59.831468 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 May 15 00:56:59.831474 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 15 00:56:59.831481 kernel: Using GB pages for direct mapping May 15 00:56:59.831487 kernel: Secure boot disabled May 15 00:56:59.831493 kernel: ACPI: Early table checksum verification disabled May 15 00:56:59.831501 
kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) May 15 00:56:59.831507 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) May 15 00:56:59.831514 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:56:59.831520 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:56:59.831527 kernel: ACPI: FACS 0x000000009CBDD000 000040 May 15 00:56:59.831533 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:56:59.831540 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:56:59.831547 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:56:59.831553 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 00:56:59.831560 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) May 15 00:56:59.831567 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] May 15 00:56:59.831574 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] May 15 00:56:59.831581 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] May 15 00:56:59.831587 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] May 15 00:56:59.831593 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] May 15 00:56:59.831600 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] May 15 00:56:59.831606 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] May 15 00:56:59.831613 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] May 15 00:56:59.831620 kernel: No NUMA configuration found May 15 00:56:59.831626 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] May 15 00:56:59.831633 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] May 15 
00:56:59.831639 kernel: Zone ranges: May 15 00:56:59.831646 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 15 00:56:59.831652 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] May 15 00:56:59.831659 kernel: Normal empty May 15 00:56:59.831665 kernel: Movable zone start for each node May 15 00:56:59.831671 kernel: Early memory node ranges May 15 00:56:59.831679 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 15 00:56:59.831686 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] May 15 00:56:59.831692 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] May 15 00:56:59.831698 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] May 15 00:56:59.831705 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] May 15 00:56:59.831711 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] May 15 00:56:59.831718 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] May 15 00:56:59.831724 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 00:56:59.831731 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 15 00:56:59.831737 kernel: On node 0, zone DMA: 8 pages in unavailable ranges May 15 00:56:59.831745 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 15 00:56:59.831751 kernel: On node 0, zone DMA: 240 pages in unavailable ranges May 15 00:56:59.831758 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges May 15 00:56:59.831764 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges May 15 00:56:59.831771 kernel: ACPI: PM-Timer IO Port: 0x608 May 15 00:56:59.831777 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) May 15 00:56:59.831784 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 May 15 00:56:59.831790 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) May 15 00:56:59.831796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) May 15 00:56:59.831804 kernel: ACPI: 
INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 15 00:56:59.831811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) May 15 00:56:59.831817 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) May 15 00:56:59.831823 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 15 00:56:59.831830 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 May 15 00:56:59.831836 kernel: TSC deadline timer available May 15 00:56:59.831843 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs May 15 00:56:59.831849 kernel: kvm-guest: KVM setup pv remote TLB flush May 15 00:56:59.831856 kernel: kvm-guest: setup PV sched yield May 15 00:56:59.831863 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices May 15 00:56:59.831870 kernel: Booting paravirtualized kernel on KVM May 15 00:56:59.831881 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 15 00:56:59.831890 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 May 15 00:56:59.831896 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 May 15 00:56:59.831903 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 May 15 00:56:59.831910 kernel: pcpu-alloc: [0] 0 1 2 3 May 15 00:56:59.831916 kernel: kvm-guest: setup async PF for cpu 0 May 15 00:56:59.831923 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 May 15 00:56:59.831930 kernel: kvm-guest: PV spinlocks enabled May 15 00:56:59.831937 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 15 00:56:59.831944 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 629759 May 15 00:56:59.831963 kernel: Policy zone: DMA32 May 15 00:56:59.831971 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 00:56:59.831978 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 00:56:59.831985 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 00:56:59.831993 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 00:56:59.832000 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 00:56:59.832008 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47456K init, 4124K bss, 169308K reserved, 0K cma-reserved) May 15 00:56:59.832015 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 00:56:59.832021 kernel: ftrace: allocating 34584 entries in 136 pages May 15 00:56:59.832028 kernel: ftrace: allocated 136 pages with 2 groups May 15 00:56:59.832035 kernel: rcu: Hierarchical RCU implementation. May 15 00:56:59.832043 kernel: rcu: RCU event tracing is enabled. May 15 00:56:59.832050 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 00:56:59.832058 kernel: Rude variant of Tasks RCU enabled. May 15 00:56:59.832065 kernel: Tracing variant of Tasks RCU enabled. May 15 00:56:59.832072 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 00:56:59.832079 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 00:56:59.832085 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 May 15 00:56:59.832092 kernel: Console: colour dummy device 80x25 May 15 00:56:59.832099 kernel: printk: console [ttyS0] enabled May 15 00:56:59.832106 kernel: ACPI: Core revision 20210730 May 15 00:56:59.832113 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns May 15 00:56:59.832121 kernel: APIC: Switch to symmetric I/O mode setup May 15 00:56:59.832128 kernel: x2apic enabled May 15 00:56:59.832135 kernel: Switched APIC routing to physical x2apic. May 15 00:56:59.832142 kernel: kvm-guest: setup PV IPIs May 15 00:56:59.832148 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 May 15 00:56:59.832155 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized May 15 00:56:59.832162 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746) May 15 00:56:59.832182 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated May 15 00:56:59.832189 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 May 15 00:56:59.832197 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 May 15 00:56:59.832204 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 15 00:56:59.832211 kernel: Spectre V2 : Mitigation: Retpolines May 15 00:56:59.832218 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 15 00:56:59.832224 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls May 15 00:56:59.832238 kernel: RETBleed: Mitigation: untrained return thunk May 15 00:56:59.832245 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier May 15 00:56:59.832252 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp May 15 00:56:59.832259 kernel: 
x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 15 00:56:59.832267 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 15 00:56:59.832274 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 15 00:56:59.832282 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 15 00:56:59.832289 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. May 15 00:56:59.832295 kernel: Freeing SMP alternatives memory: 32K May 15 00:56:59.832302 kernel: pid_max: default: 32768 minimum: 301 May 15 00:56:59.832309 kernel: LSM: Security Framework initializing May 15 00:56:59.832316 kernel: SELinux: Initializing. May 15 00:56:59.832322 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:56:59.832331 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 00:56:59.832338 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) May 15 00:56:59.832344 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. May 15 00:56:59.832351 kernel: ... version: 0 May 15 00:56:59.832358 kernel: ... bit width: 48 May 15 00:56:59.832387 kernel: ... generic registers: 6 May 15 00:56:59.832395 kernel: ... value mask: 0000ffffffffffff May 15 00:56:59.832403 kernel: ... max period: 00007fffffffffff May 15 00:56:59.832410 kernel: ... fixed-purpose events: 0 May 15 00:56:59.832418 kernel: ... event mask: 000000000000003f May 15 00:56:59.832425 kernel: signal: max sigframe size: 1776 May 15 00:56:59.832431 kernel: rcu: Hierarchical SRCU implementation. May 15 00:56:59.832438 kernel: smp: Bringing up secondary CPUs ... May 15 00:56:59.832445 kernel: x86: Booting SMP configuration: May 15 00:56:59.832452 kernel: .... 
node #0, CPUs: #1 May 15 00:56:59.832458 kernel: kvm-clock: cpu 1, msr d196041, secondary cpu clock May 15 00:56:59.832465 kernel: kvm-guest: setup async PF for cpu 1 May 15 00:56:59.832472 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 May 15 00:56:59.832480 kernel: #2 May 15 00:56:59.832487 kernel: kvm-clock: cpu 2, msr d196081, secondary cpu clock May 15 00:56:59.832494 kernel: kvm-guest: setup async PF for cpu 2 May 15 00:56:59.832500 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 May 15 00:56:59.832507 kernel: #3 May 15 00:56:59.832514 kernel: kvm-clock: cpu 3, msr d1960c1, secondary cpu clock May 15 00:56:59.832520 kernel: kvm-guest: setup async PF for cpu 3 May 15 00:56:59.832527 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 May 15 00:56:59.832534 kernel: smp: Brought up 1 node, 4 CPUs May 15 00:56:59.832540 kernel: smpboot: Max logical packages: 1 May 15 00:56:59.832548 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS) May 15 00:56:59.832555 kernel: devtmpfs: initialized May 15 00:56:59.832562 kernel: x86/mm: Memory block size: 128MB May 15 00:56:59.832569 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) May 15 00:56:59.832576 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) May 15 00:56:59.832583 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) May 15 00:56:59.832589 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) May 15 00:56:59.832596 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) May 15 00:56:59.832603 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 00:56:59.832611 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 00:56:59.832618 kernel: pinctrl core: initialized pinctrl subsystem May 15 00:56:59.832625 kernel: NET: Registered 
PF_NETLINK/PF_ROUTE protocol family May 15 00:56:59.832632 kernel: audit: initializing netlink subsys (disabled) May 15 00:56:59.832639 kernel: audit: type=2000 audit(1747270619.384:1): state=initialized audit_enabled=0 res=1 May 15 00:56:59.832645 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 00:56:59.832652 kernel: thermal_sys: Registered thermal governor 'user_space' May 15 00:56:59.832659 kernel: cpuidle: using governor menu May 15 00:56:59.832667 kernel: ACPI: bus type PCI registered May 15 00:56:59.832674 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 00:56:59.832680 kernel: dca service started, version 1.12.1 May 15 00:56:59.832687 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) May 15 00:56:59.832694 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 May 15 00:56:59.832701 kernel: PCI: Using configuration type 1 for base access May 15 00:56:59.832708 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
May 15 00:56:59.832715 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 15 00:56:59.832721 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 15 00:56:59.832729 kernel: ACPI: Added _OSI(Module Device) May 15 00:56:59.832736 kernel: ACPI: Added _OSI(Processor Device) May 15 00:56:59.832743 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 00:56:59.832750 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 00:56:59.832756 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 15 00:56:59.832763 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 15 00:56:59.832770 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 15 00:56:59.832777 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 00:56:59.832784 kernel: ACPI: Interpreter enabled May 15 00:56:59.832790 kernel: ACPI: PM: (supports S0 S3 S5) May 15 00:56:59.832798 kernel: ACPI: Using IOAPIC for interrupt routing May 15 00:56:59.832805 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 15 00:56:59.832812 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F May 15 00:56:59.832819 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 00:56:59.832984 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 00:56:59.833060 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] May 15 00:56:59.833127 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] May 15 00:56:59.833139 kernel: PCI host bridge to bus 0000:00 May 15 00:56:59.833214 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] May 15 00:56:59.833287 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] May 15 00:56:59.833350 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] May 15 00:56:59.833411 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] May 15 
00:56:59.833472 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] May 15 00:56:59.833532 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] May 15 00:56:59.833596 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 00:56:59.833682 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 May 15 00:56:59.833760 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 May 15 00:56:59.833829 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] May 15 00:56:59.833897 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] May 15 00:56:59.833996 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] May 15 00:56:59.834069 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb May 15 00:56:59.834141 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] May 15 00:56:59.834219 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 May 15 00:56:59.834305 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] May 15 00:56:59.834376 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] May 15 00:56:59.834449 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] May 15 00:56:59.834528 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 May 15 00:56:59.834650 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] May 15 00:56:59.834722 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] May 15 00:56:59.834793 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] May 15 00:56:59.834869 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 May 15 00:56:59.834938 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] May 15 00:56:59.835059 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] May 15 00:56:59.835128 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] May 15 00:56:59.835198 kernel: pci 0000:00:04.0: reg 
0x30: [mem 0xfffc0000-0xffffffff pref] May 15 00:56:59.835281 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 May 15 00:56:59.835349 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO May 15 00:56:59.835425 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 May 15 00:56:59.835494 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] May 15 00:56:59.835564 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] May 15 00:56:59.835649 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 May 15 00:56:59.835721 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] May 15 00:56:59.835730 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 May 15 00:56:59.835738 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 May 15 00:56:59.835745 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 May 15 00:56:59.835751 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 May 15 00:56:59.835758 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 May 15 00:56:59.835765 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 May 15 00:56:59.835772 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 May 15 00:56:59.835781 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 May 15 00:56:59.835788 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 May 15 00:56:59.835795 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 May 15 00:56:59.835802 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 May 15 00:56:59.835809 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 May 15 00:56:59.835815 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 May 15 00:56:59.835822 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 May 15 00:56:59.835829 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 May 15 00:56:59.835836 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 May 15 
00:56:59.835844 kernel: iommu: Default domain type: Translated May 15 00:56:59.835851 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 15 00:56:59.835917 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device May 15 00:56:59.835999 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none May 15 00:56:59.836076 kernel: pci 0000:00:01.0: vgaarb: bridge control possible May 15 00:56:59.836085 kernel: vgaarb: loaded May 15 00:56:59.836092 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 00:56:59.836099 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 00:56:59.836106 kernel: PTP clock support registered May 15 00:56:59.836116 kernel: Registered efivars operations May 15 00:56:59.836123 kernel: PCI: Using ACPI for IRQ routing May 15 00:56:59.836130 kernel: PCI: pci_cache_line_size set to 64 bytes May 15 00:56:59.836137 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] May 15 00:56:59.836144 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] May 15 00:56:59.836150 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] May 15 00:56:59.836157 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] May 15 00:56:59.836164 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] May 15 00:56:59.836171 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] May 15 00:56:59.836179 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 May 15 00:56:59.836186 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter May 15 00:56:59.836192 kernel: clocksource: Switched to clocksource kvm-clock May 15 00:56:59.836199 kernel: VFS: Disk quotas dquot_6.6.0 May 15 00:56:59.836206 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 00:56:59.836213 kernel: pnp: PnP ACPI init May 15 00:56:59.836300 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved May 15 00:56:59.836313 kernel: pnp: PnP ACPI: found 6 devices 
May 15 00:56:59.836320 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 15 00:56:59.836327 kernel: NET: Registered PF_INET protocol family May 15 00:56:59.836334 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 00:56:59.836341 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 00:56:59.836348 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 00:56:59.836355 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 00:56:59.836362 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 15 00:56:59.836369 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 00:56:59.836377 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:56:59.836384 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 00:56:59.836391 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 00:56:59.836398 kernel: NET: Registered PF_XDP protocol family May 15 00:56:59.836467 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window May 15 00:56:59.836538 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] May 15 00:56:59.836601 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] May 15 00:56:59.836663 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] May 15 00:56:59.836728 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] May 15 00:56:59.836789 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] May 15 00:56:59.836850 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] May 15 00:56:59.836911 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] May 15 00:56:59.836921 kernel: PCI: CLS 0 bytes, default 64 May 15 00:56:59.836934 
kernel: Initialise system trusted keyrings May 15 00:56:59.836941 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 00:56:59.836948 kernel: Key type asymmetric registered May 15 00:56:59.836966 kernel: Asymmetric key parser 'x509' registered May 15 00:56:59.836975 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 15 00:56:59.836982 kernel: io scheduler mq-deadline registered May 15 00:56:59.836997 kernel: io scheduler kyber registered May 15 00:56:59.837006 kernel: io scheduler bfq registered May 15 00:56:59.837013 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 15 00:56:59.837020 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 May 15 00:56:59.837028 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 May 15 00:56:59.837035 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 May 15 00:56:59.837042 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 00:56:59.837050 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 15 00:56:59.837058 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 May 15 00:56:59.837065 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 May 15 00:56:59.837072 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 May 15 00:56:59.837080 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 May 15 00:56:59.837154 kernel: rtc_cmos 00:04: RTC can wake from S4 May 15 00:56:59.837219 kernel: rtc_cmos 00:04: registered as rtc0 May 15 00:56:59.837292 kernel: rtc_cmos 00:04: setting system clock to 2025-05-15T00:56:59 UTC (1747270619) May 15 00:56:59.837360 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs May 15 00:56:59.837369 kernel: efifb: probing for efifb May 15 00:56:59.837377 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k May 15 00:56:59.837384 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 May 15 00:56:59.837391 kernel: efifb: 
scrolling: redraw May 15 00:56:59.837399 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 15 00:56:59.837406 kernel: Console: switching to colour frame buffer device 160x50 May 15 00:56:59.837413 kernel: fb0: EFI VGA frame buffer device May 15 00:56:59.837420 kernel: pstore: Registered efi as persistent store backend May 15 00:56:59.837429 kernel: NET: Registered PF_INET6 protocol family May 15 00:56:59.837436 kernel: Segment Routing with IPv6 May 15 00:56:59.837443 kernel: In-situ OAM (IOAM) with IPv6 May 15 00:56:59.837452 kernel: NET: Registered PF_PACKET protocol family May 15 00:56:59.837459 kernel: Key type dns_resolver registered May 15 00:56:59.837466 kernel: IPI shorthand broadcast: enabled May 15 00:56:59.837474 kernel: sched_clock: Marking stable (453527609, 126074435)->(596011112, -16409068) May 15 00:56:59.837482 kernel: registered taskstats version 1 May 15 00:56:59.837489 kernel: Loading compiled-in X.509 certificates May 15 00:56:59.837496 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: a3400373b5c34ccb74f940604f224840f2b40bdd' May 15 00:56:59.837504 kernel: Key type .fscrypt registered May 15 00:56:59.837511 kernel: Key type fscrypt-provisioning registered May 15 00:56:59.837518 kernel: pstore: Using crash dump compression: deflate May 15 00:56:59.837526 kernel: ima: No TPM chip found, activating TPM-bypass! 
May 15 00:56:59.837534 kernel: ima: Allocated hash algorithm: sha1 May 15 00:56:59.837541 kernel: ima: No architecture policies found May 15 00:56:59.837548 kernel: clk: Disabling unused clocks May 15 00:56:59.837555 kernel: Freeing unused kernel image (initmem) memory: 47456K May 15 00:56:59.837563 kernel: Write protecting the kernel read-only data: 28672k May 15 00:56:59.837570 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K May 15 00:56:59.837577 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K May 15 00:56:59.837585 kernel: Run /init as init process May 15 00:56:59.837592 kernel: with arguments: May 15 00:56:59.837600 kernel: /init May 15 00:56:59.837607 kernel: with environment: May 15 00:56:59.837614 kernel: HOME=/ May 15 00:56:59.837621 kernel: TERM=linux May 15 00:56:59.837628 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 00:56:59.837637 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 00:56:59.837647 systemd[1]: Detected virtualization kvm. May 15 00:56:59.837655 systemd[1]: Detected architecture x86-64. May 15 00:56:59.837664 systemd[1]: Running in initrd. May 15 00:56:59.837672 systemd[1]: No hostname configured, using default hostname. May 15 00:56:59.837679 systemd[1]: Hostname set to . May 15 00:56:59.837688 systemd[1]: Initializing machine ID from VM UUID. May 15 00:56:59.837695 systemd[1]: Queued start job for default target initrd.target. May 15 00:56:59.837703 systemd[1]: Started systemd-ask-password-console.path. May 15 00:56:59.837711 systemd[1]: Reached target cryptsetup.target. May 15 00:56:59.837719 systemd[1]: Reached target paths.target. May 15 00:56:59.837726 systemd[1]: Reached target slices.target. 
May 15 00:56:59.837735 systemd[1]: Reached target swap.target. May 15 00:56:59.837743 systemd[1]: Reached target timers.target. May 15 00:56:59.837751 systemd[1]: Listening on iscsid.socket. May 15 00:56:59.837759 systemd[1]: Listening on iscsiuio.socket. May 15 00:56:59.837767 systemd[1]: Listening on systemd-journald-audit.socket. May 15 00:56:59.837775 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 00:56:59.837783 systemd[1]: Listening on systemd-journald.socket. May 15 00:56:59.837792 systemd[1]: Listening on systemd-networkd.socket. May 15 00:56:59.837800 systemd[1]: Listening on systemd-udevd-control.socket. May 15 00:56:59.837808 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 00:56:59.837816 systemd[1]: Reached target sockets.target. May 15 00:56:59.837823 systemd[1]: Starting kmod-static-nodes.service... May 15 00:56:59.837831 systemd[1]: Finished network-cleanup.service. May 15 00:56:59.837839 systemd[1]: Starting systemd-fsck-usr.service... May 15 00:56:59.837847 systemd[1]: Starting systemd-journald.service... May 15 00:56:59.837855 systemd[1]: Starting systemd-modules-load.service... May 15 00:56:59.837864 systemd[1]: Starting systemd-resolved.service... May 15 00:56:59.837872 systemd[1]: Starting systemd-vconsole-setup.service... May 15 00:56:59.837895 systemd[1]: Finished kmod-static-nodes.service. May 15 00:56:59.837903 systemd[1]: Finished systemd-fsck-usr.service. May 15 00:56:59.837911 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 00:56:59.837919 systemd[1]: Finished systemd-vconsole-setup.service. May 15 00:56:59.837927 kernel: audit: type=1130 audit(1747270619.833:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.837935 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
May 15 00:56:59.837948 systemd-journald[198]: Journal started May 15 00:56:59.837998 systemd-journald[198]: Runtime Journal (/run/log/journal/768d8987db4f4db9be87503512234687) is 6.0M, max 48.4M, 42.4M free. May 15 00:56:59.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.843972 kernel: audit: type=1130 audit(1747270619.838:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.843990 systemd[1]: Started systemd-journald.service. May 15 00:56:59.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.844975 kernel: audit: type=1130 audit(1747270619.843:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.845661 systemd[1]: Starting dracut-cmdline-ask.service... May 15 00:56:59.848008 systemd-modules-load[199]: Inserted module 'overlay' May 15 00:56:59.861344 systemd-resolved[200]: Positive Trust Anchors: May 15 00:56:59.861360 systemd-resolved[200]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 00:56:59.861387 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 00:56:59.863596 systemd-resolved[200]: Defaulting to hostname 'linux'. May 15 00:56:59.864497 systemd[1]: Started systemd-resolved.service. May 15 00:56:59.864735 systemd[1]: Reached target nss-lookup.target. May 15 00:56:59.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.868970 kernel: audit: type=1130 audit(1747270619.863:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.877245 systemd[1]: Finished dracut-cmdline-ask.service. May 15 00:56:59.882634 kernel: audit: type=1130 audit(1747270619.877:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.881780 systemd[1]: Starting dracut-cmdline.service... May 15 00:56:59.886977 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. May 15 00:56:59.888747 dracut-cmdline[215]: dracut-dracut-053 May 15 00:56:59.890505 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bd2e5c4f6706621ae2eebb207adba6951c52e019661e3e87d19fb6c7284acf54 May 15 00:56:59.895694 kernel: Bridge firewalling registered May 15 00:56:59.891112 systemd-modules-load[199]: Inserted module 'br_netfilter' May 15 00:56:59.911980 kernel: SCSI subsystem initialized May 15 00:56:59.922418 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 00:56:59.922443 kernel: device-mapper: uevent: version 1.0.3 May 15 00:56:59.923675 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 15 00:56:59.926354 systemd-modules-load[199]: Inserted module 'dm_multipath' May 15 00:56:59.927086 systemd[1]: Finished systemd-modules-load.service. May 15 00:56:59.932247 kernel: audit: type=1130 audit(1747270619.926:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.928043 systemd[1]: Starting systemd-sysctl.service... May 15 00:56:59.937646 systemd[1]: Finished systemd-sysctl.service. 
May 15 00:56:59.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.942998 kernel: audit: type=1130 audit(1747270619.936:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:56:59.947976 kernel: Loading iSCSI transport class v2.0-870. May 15 00:56:59.963988 kernel: iscsi: registered transport (tcp) May 15 00:56:59.984983 kernel: iscsi: registered transport (qla4xxx) May 15 00:56:59.985005 kernel: QLogic iSCSI HBA Driver May 15 00:57:00.009644 systemd[1]: Finished dracut-cmdline.service. May 15 00:57:00.030441 kernel: audit: type=1130 audit(1747270620.025:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:00.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:00.026899 systemd[1]: Starting dracut-pre-udev.service... 
May 15 00:57:00.072987 kernel: raid6: avx2x4 gen() 29540 MB/s May 15 00:57:00.089980 kernel: raid6: avx2x4 xor() 7070 MB/s May 15 00:57:00.106979 kernel: raid6: avx2x2 gen() 31596 MB/s May 15 00:57:00.123980 kernel: raid6: avx2x2 xor() 19058 MB/s May 15 00:57:00.140978 kernel: raid6: avx2x1 gen() 26150 MB/s May 15 00:57:00.157985 kernel: raid6: avx2x1 xor() 14644 MB/s May 15 00:57:00.174982 kernel: raid6: sse2x4 gen() 14591 MB/s May 15 00:57:00.191977 kernel: raid6: sse2x4 xor() 6988 MB/s May 15 00:57:00.208994 kernel: raid6: sse2x2 gen() 15912 MB/s May 15 00:57:00.246011 kernel: raid6: sse2x2 xor() 9662 MB/s May 15 00:57:00.262979 kernel: raid6: sse2x1 gen() 11890 MB/s May 15 00:57:00.280407 kernel: raid6: sse2x1 xor() 7721 MB/s May 15 00:57:00.280437 kernel: raid6: using algorithm avx2x2 gen() 31596 MB/s May 15 00:57:00.280450 kernel: raid6: .... xor() 19058 MB/s, rmw enabled May 15 00:57:00.281156 kernel: raid6: using avx2x2 recovery algorithm May 15 00:57:00.292979 kernel: xor: automatically using best checksumming function avx May 15 00:57:00.381985 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 15 00:57:00.389814 systemd[1]: Finished dracut-pre-udev.service. May 15 00:57:00.394486 kernel: audit: type=1130 audit(1747270620.388:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:00.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:00.393000 audit: BPF prog-id=7 op=LOAD May 15 00:57:00.393000 audit: BPF prog-id=8 op=LOAD May 15 00:57:00.394916 systemd[1]: Starting systemd-udevd.service... May 15 00:57:00.410401 systemd-udevd[401]: Using default interface naming scheme 'v252'. May 15 00:57:00.416331 systemd[1]: Started systemd-udevd.service. 
May 15 00:57:00.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:00.433557 systemd[1]: Starting dracut-pre-trigger.service... May 15 00:57:00.443126 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation May 15 00:57:00.468926 systemd[1]: Finished dracut-pre-trigger.service. May 15 00:57:00.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:00.470749 systemd[1]: Starting systemd-udev-trigger.service... May 15 00:57:00.501842 systemd[1]: Finished systemd-udev-trigger.service. May 15 00:57:00.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:00.535506 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 00:57:00.549453 kernel: cryptd: max_cpu_qlen set to 1000 May 15 00:57:00.549466 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 00:57:00.549475 kernel: GPT:9289727 != 19775487 May 15 00:57:00.549484 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 00:57:00.549493 kernel: GPT:9289727 != 19775487 May 15 00:57:00.549501 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 00:57:00.549510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:57:00.555980 kernel: libata version 3.00 loaded. May 15 00:57:00.569199 kernel: AVX2 version of gcm_enc/dec engaged. 
May 15 00:57:00.569263 kernel: AES CTR mode by8 optimization enabled May 15 00:57:00.579598 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) May 15 00:57:00.579621 kernel: ahci 0000:00:1f.2: version 3.0 May 15 00:57:00.600810 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 May 15 00:57:00.600824 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode May 15 00:57:00.600910 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only May 15 00:57:00.600999 kernel: scsi host0: ahci May 15 00:57:00.601095 kernel: scsi host1: ahci May 15 00:57:00.601177 kernel: scsi host2: ahci May 15 00:57:00.601269 kernel: scsi host3: ahci May 15 00:57:00.601369 kernel: scsi host4: ahci May 15 00:57:00.601466 kernel: scsi host5: ahci May 15 00:57:00.601561 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 May 15 00:57:00.601573 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 May 15 00:57:00.601582 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 May 15 00:57:00.601591 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 May 15 00:57:00.601600 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 May 15 00:57:00.601610 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 May 15 00:57:00.584325 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 15 00:57:00.585808 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 15 00:57:00.590701 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 15 00:57:00.596672 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 15 00:57:00.604784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 00:57:00.611414 systemd[1]: Starting disk-uuid.service... May 15 00:57:00.618158 disk-uuid[524]: Primary Header is updated. 
May 15 00:57:00.618158 disk-uuid[524]: Secondary Entries is updated. May 15 00:57:00.618158 disk-uuid[524]: Secondary Header is updated. May 15 00:57:00.621539 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:57:00.629098 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:57:00.915450 kernel: ata6: SATA link down (SStatus 0 SControl 300) May 15 00:57:00.915506 kernel: ata2: SATA link down (SStatus 0 SControl 300) May 15 00:57:00.915516 kernel: ata5: SATA link down (SStatus 0 SControl 300) May 15 00:57:00.915525 kernel: ata4: SATA link down (SStatus 0 SControl 300) May 15 00:57:00.916979 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) May 15 00:57:00.917977 kernel: ata1: SATA link down (SStatus 0 SControl 300) May 15 00:57:00.918987 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 May 15 00:57:00.920528 kernel: ata3.00: applying bridge limits May 15 00:57:00.920546 kernel: ata3.00: configured for UDMA/100 May 15 00:57:00.920979 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 May 15 00:57:00.954028 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray May 15 00:57:00.971554 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 15 00:57:00.971568 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 May 15 00:57:01.629388 disk-uuid[526]: The operation has completed successfully. May 15 00:57:01.630721 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 00:57:01.654869 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 00:57:01.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.654945 systemd[1]: Finished disk-uuid.service. 
May 15 00:57:01.659375 systemd[1]: Starting verity-setup.service... May 15 00:57:01.670976 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" May 15 00:57:01.687116 systemd[1]: Found device dev-mapper-usr.device. May 15 00:57:01.688879 systemd[1]: Finished verity-setup.service. May 15 00:57:01.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.690897 systemd[1]: Mounting sysusr-usr.mount... May 15 00:57:01.746880 systemd[1]: Mounted sysusr-usr.mount. May 15 00:57:01.748294 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 15 00:57:01.747097 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 15 00:57:01.748312 systemd[1]: Starting ignition-setup.service... May 15 00:57:01.750783 systemd[1]: Starting parse-ip-for-networkd.service... May 15 00:57:01.760354 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:57:01.760395 kernel: BTRFS info (device vda6): using free space tree May 15 00:57:01.760405 kernel: BTRFS info (device vda6): has skinny extents May 15 00:57:01.767321 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 00:57:01.774747 systemd[1]: Finished ignition-setup.service. May 15 00:57:01.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.776231 systemd[1]: Starting ignition-fetch-offline.service... May 15 00:57:01.811596 systemd[1]: Finished parse-ip-for-networkd.service. May 15 00:57:01.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 15 00:57:01.813000 audit: BPF prog-id=9 op=LOAD May 15 00:57:01.814038 systemd[1]: Starting systemd-networkd.service... May 15 00:57:01.816180 ignition[649]: Ignition 2.14.0 May 15 00:57:01.816187 ignition[649]: Stage: fetch-offline May 15 00:57:01.816255 ignition[649]: no configs at "/usr/lib/ignition/base.d" May 15 00:57:01.816266 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:57:01.816362 ignition[649]: parsed url from cmdline: "" May 15 00:57:01.816365 ignition[649]: no config URL provided May 15 00:57:01.816369 ignition[649]: reading system config file "/usr/lib/ignition/user.ign" May 15 00:57:01.816375 ignition[649]: no config at "/usr/lib/ignition/user.ign" May 15 00:57:01.816391 ignition[649]: op(1): [started] loading QEMU firmware config module May 15 00:57:01.816395 ignition[649]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 00:57:01.825965 ignition[649]: op(1): [finished] loading QEMU firmware config module May 15 00:57:01.836558 systemd-networkd[720]: lo: Link UP May 15 00:57:01.836569 systemd-networkd[720]: lo: Gained carrier May 15 00:57:01.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.837350 systemd-networkd[720]: Enumeration completed May 15 00:57:01.837460 systemd[1]: Started systemd-networkd.service. May 15 00:57:01.837844 systemd-networkd[720]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:57:01.838772 systemd[1]: Reached target network.target. May 15 00:57:01.840442 systemd-networkd[720]: eth0: Link UP May 15 00:57:01.840446 systemd-networkd[720]: eth0: Gained carrier May 15 00:57:01.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:01.841359 systemd[1]: Starting iscsiuio.service... May 15 00:57:01.846454 systemd[1]: Started iscsiuio.service. May 15 00:57:01.850137 systemd[1]: Starting iscsid.service... May 15 00:57:01.854100 iscsid[727]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 00:57:01.854100 iscsid[727]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 15 00:57:01.854100 iscsid[727]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 00:57:01.854100 iscsid[727]: If using hardware iscsi like qla4xxx this message can be ignored. May 15 00:57:01.854100 iscsid[727]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 00:57:01.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.865285 iscsid[727]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 00:57:01.855627 systemd[1]: Started iscsid.service. May 15 00:57:01.868246 systemd[1]: Starting dracut-initqueue.service... May 15 00:57:01.877504 systemd[1]: Finished dracut-initqueue.service. May 15 00:57:01.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.879295 systemd[1]: Reached target remote-fs-pre.target. May 15 00:57:01.880268 systemd[1]: Reached target remote-cryptsetup.target. 
May 15 00:57:01.882897 systemd[1]: Reached target remote-fs.target. May 15 00:57:01.885095 systemd[1]: Starting dracut-pre-mount.service... May 15 00:57:01.890094 ignition[649]: parsing config with SHA512: 77b9a89675edfd0a787a63f65b5cdc22fcf4e50c7c300004c8312176989dc1c00f513fd90497061f6ec743eb1ef06925152668f278edf08ecd864f9f861ac1b2 May 15 00:57:01.893656 systemd[1]: Finished dracut-pre-mount.service. May 15 00:57:01.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.897743 unknown[649]: fetched base config from "system" May 15 00:57:01.897753 unknown[649]: fetched user config from "qemu" May 15 00:57:01.898168 ignition[649]: fetch-offline: fetch-offline passed May 15 00:57:01.898228 ignition[649]: Ignition finished successfully May 15 00:57:01.901033 systemd-networkd[720]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:57:01.903229 systemd[1]: Finished ignition-fetch-offline.service. May 15 00:57:01.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.903449 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 00:57:01.904937 systemd[1]: Starting ignition-kargs.service... 
May 15 00:57:01.916046 ignition[741]: Ignition 2.14.0 May 15 00:57:01.916055 ignition[741]: Stage: kargs May 15 00:57:01.916136 ignition[741]: no configs at "/usr/lib/ignition/base.d" May 15 00:57:01.916145 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:57:01.917205 ignition[741]: kargs: kargs passed May 15 00:57:01.917237 ignition[741]: Ignition finished successfully May 15 00:57:01.921431 systemd[1]: Finished ignition-kargs.service. May 15 00:57:01.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.923066 systemd[1]: Starting ignition-disks.service... May 15 00:57:01.929533 ignition[747]: Ignition 2.14.0 May 15 00:57:01.929542 ignition[747]: Stage: disks May 15 00:57:01.929632 ignition[747]: no configs at "/usr/lib/ignition/base.d" May 15 00:57:01.929641 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:57:01.930597 ignition[747]: disks: disks passed May 15 00:57:01.930640 ignition[747]: Ignition finished successfully May 15 00:57:01.934801 systemd[1]: Finished ignition-disks.service. May 15 00:57:01.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:01.936376 systemd[1]: Reached target initrd-root-device.target. May 15 00:57:01.936438 systemd[1]: Reached target local-fs-pre.target. May 15 00:57:01.939617 systemd[1]: Reached target local-fs.target. May 15 00:57:01.941107 systemd[1]: Reached target sysinit.target. May 15 00:57:01.942600 systemd[1]: Reached target basic.target. May 15 00:57:01.944774 systemd[1]: Starting systemd-fsck-root.service... 
May 15 00:57:01.983516 systemd-fsck[755]: ROOT: clean, 619/553520 files, 56023/553472 blocks May 15 00:57:02.117278 systemd[1]: Finished systemd-fsck-root.service. May 15 00:57:02.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:02.120099 systemd[1]: Mounting sysroot.mount... May 15 00:57:02.147977 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 00:57:02.148118 systemd[1]: Mounted sysroot.mount. May 15 00:57:02.148283 systemd[1]: Reached target initrd-root-fs.target. May 15 00:57:02.160140 systemd[1]: Mounting sysroot-usr.mount... May 15 00:57:02.160534 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 00:57:02.160569 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 00:57:02.160588 systemd[1]: Reached target ignition-diskful.target. May 15 00:57:02.168215 systemd[1]: Mounted sysroot-usr.mount. May 15 00:57:02.169606 systemd[1]: Starting initrd-setup-root.service... May 15 00:57:02.174599 initrd-setup-root[765]: cut: /sysroot/etc/passwd: No such file or directory May 15 00:57:02.178009 initrd-setup-root[773]: cut: /sysroot/etc/group: No such file or directory May 15 00:57:02.181088 initrd-setup-root[781]: cut: /sysroot/etc/shadow: No such file or directory May 15 00:57:02.184603 initrd-setup-root[789]: cut: /sysroot/etc/gshadow: No such file or directory May 15 00:57:02.207524 systemd[1]: Finished initrd-setup-root.service. May 15 00:57:02.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:02.218692 systemd[1]: Starting ignition-mount.service... May 15 00:57:02.220024 systemd[1]: Starting sysroot-boot.service... May 15 00:57:02.223121 bash[806]: umount: /sysroot/usr/share/oem: not mounted. May 15 00:57:02.232660 ignition[808]: INFO : Ignition 2.14.0 May 15 00:57:02.233860 ignition[808]: INFO : Stage: mount May 15 00:57:02.233860 ignition[808]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:57:02.233860 ignition[808]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:57:02.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:02.238131 ignition[808]: INFO : mount: mount passed May 15 00:57:02.238131 ignition[808]: INFO : Ignition finished successfully May 15 00:57:02.234839 systemd[1]: Finished ignition-mount.service. May 15 00:57:02.241415 systemd[1]: Finished sysroot-boot.service. May 15 00:57:02.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:02.697988 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 15 00:57:02.704464 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) May 15 00:57:02.704494 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm May 15 00:57:02.704507 kernel: BTRFS info (device vda6): using free space tree May 15 00:57:02.705286 kernel: BTRFS info (device vda6): has skinny extents May 15 00:57:02.708671 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 15 00:57:02.710268 systemd[1]: Starting ignition-files.service... 
May 15 00:57:02.722107 ignition[836]: INFO : Ignition 2.14.0 May 15 00:57:02.722107 ignition[836]: INFO : Stage: files May 15 00:57:02.724053 ignition[836]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:57:02.724053 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:57:02.724053 ignition[836]: DEBUG : files: compiled without relabeling support, skipping May 15 00:57:02.724053 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 00:57:02.724053 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 00:57:02.731099 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 00:57:02.731099 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 00:57:02.731099 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 00:57:02.731099 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 15 00:57:02.731099 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 15 00:57:02.731099 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 00:57:02.731099 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 15 00:57:02.726495 unknown[836]: wrote ssh authorized keys file for user: core May 15 00:57:02.828972 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 00:57:02.971084 systemd-networkd[720]: eth0: Gained IPv6LL May 15 00:57:02.978197 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 00:57:02.980105 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 00:57:03.005063 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 15 00:57:03.454646 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 15 00:57:03.788894 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 15 00:57:03.788894 ignition[836]: INFO : files: op(c): [started] processing unit "containerd.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 15 00:57:03.793044 ignition[836]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 15 00:57:03.793044 ignition[836]: INFO : files: op(c): [finished] processing unit "containerd.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(e): [started] processing unit "prepare-helm.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(10): [started] processing unit 
"coreos-metadata.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" May 15 00:57:03.793044 ignition[836]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:57:03.822797 ignition[836]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 00:57:03.822797 ignition[836]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" May 15 00:57:03.822797 ignition[836]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 00:57:03.822797 ignition[836]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 00:57:03.822797 ignition[836]: INFO : files: files passed May 15 00:57:03.822797 ignition[836]: INFO : Ignition finished successfully May 15 00:57:03.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:03.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.822236 systemd[1]: Finished ignition-files.service. May 15 00:57:03.824863 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 15 00:57:03.826844 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 15 00:57:03.844441 initrd-setup-root-after-ignition[862]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 15 00:57:03.827542 systemd[1]: Starting ignition-quench.service... May 15 00:57:03.847387 initrd-setup-root-after-ignition[864]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 00:57:03.830378 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 00:57:03.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.830483 systemd[1]: Finished ignition-quench.service. 
May 15 00:57:03.832868 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 15 00:57:03.835501 systemd[1]: Reached target ignition-complete.target. May 15 00:57:03.838216 systemd[1]: Starting initrd-parse-etc.service... May 15 00:57:03.849661 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 00:57:03.849767 systemd[1]: Finished initrd-parse-etc.service. May 15 00:57:03.851632 systemd[1]: Reached target initrd-fs.target. May 15 00:57:03.853575 systemd[1]: Reached target initrd.target. May 15 00:57:03.854572 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 15 00:57:03.855548 systemd[1]: Starting dracut-pre-pivot.service... May 15 00:57:03.866437 systemd[1]: Finished dracut-pre-pivot.service. May 15 00:57:03.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.869339 systemd[1]: Starting initrd-cleanup.service... May 15 00:57:03.879108 systemd[1]: Stopped target nss-lookup.target. May 15 00:57:03.881242 systemd[1]: Stopped target remote-cryptsetup.target. May 15 00:57:03.883387 systemd[1]: Stopped target timers.target. May 15 00:57:03.885050 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 00:57:03.886123 systemd[1]: Stopped dracut-pre-pivot.service. May 15 00:57:03.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.888023 systemd[1]: Stopped target initrd.target. May 15 00:57:03.889666 systemd[1]: Stopped target basic.target. May 15 00:57:03.891490 systemd[1]: Stopped target ignition-complete.target. May 15 00:57:03.893920 systemd[1]: Stopped target ignition-diskful.target. 
May 15 00:57:03.896380 systemd[1]: Stopped target initrd-root-device.target. May 15 00:57:03.898827 systemd[1]: Stopped target remote-fs.target. May 15 00:57:03.901037 systemd[1]: Stopped target remote-fs-pre.target. May 15 00:57:03.903412 systemd[1]: Stopped target sysinit.target. May 15 00:57:03.905563 systemd[1]: Stopped target local-fs.target. May 15 00:57:03.907730 systemd[1]: Stopped target local-fs-pre.target. May 15 00:57:03.910011 systemd[1]: Stopped target swap.target. May 15 00:57:03.912027 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 00:57:03.913423 systemd[1]: Stopped dracut-pre-mount.service. May 15 00:57:03.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.915785 systemd[1]: Stopped target cryptsetup.target. May 15 00:57:03.917937 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 00:57:03.919305 systemd[1]: Stopped dracut-initqueue.service. May 15 00:57:03.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.921564 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 00:57:03.922845 systemd[1]: Stopped ignition-fetch-offline.service. May 15 00:57:03.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.924748 systemd[1]: Stopped target paths.target. May 15 00:57:03.926282 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 00:57:03.932016 systemd[1]: Stopped systemd-ask-password-console.path. May 15 00:57:03.933870 systemd[1]: Stopped target slices.target. 
May 15 00:57:03.935440 systemd[1]: Stopped target sockets.target. May 15 00:57:03.937044 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 00:57:03.938252 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 15 00:57:03.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.940292 systemd[1]: ignition-files.service: Deactivated successfully. May 15 00:57:03.941266 systemd[1]: Stopped ignition-files.service. May 15 00:57:03.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.943593 systemd[1]: Stopping ignition-mount.service... May 15 00:57:03.945254 systemd[1]: Stopping iscsid.service... May 15 00:57:03.946099 iscsid[727]: iscsid shutting down. May 15 00:57:03.948462 systemd[1]: Stopping sysroot-boot.service... May 15 00:57:03.949220 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 00:57:03.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.951675 ignition[877]: INFO : Ignition 2.14.0 May 15 00:57:03.951675 ignition[877]: INFO : Stage: umount May 15 00:57:03.951675 ignition[877]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 00:57:03.951675 ignition[877]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 00:57:03.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:03.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.949410 systemd[1]: Stopped systemd-udev-trigger.service. May 15 00:57:03.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.958475 ignition[877]: INFO : umount: umount passed May 15 00:57:03.958475 ignition[877]: INFO : Ignition finished successfully May 15 00:57:03.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.950489 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 00:57:03.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.950629 systemd[1]: Stopped dracut-pre-trigger.service. May 15 00:57:03.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.954383 systemd[1]: iscsid.service: Deactivated successfully. May 15 00:57:03.954463 systemd[1]: Stopped iscsid.service. May 15 00:57:03.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 15 00:57:03.955249 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 00:57:03.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.955316 systemd[1]: Stopped ignition-mount.service. May 15 00:57:03.957967 systemd[1]: iscsid.socket: Deactivated successfully. May 15 00:57:03.958056 systemd[1]: Closed iscsid.socket. May 15 00:57:03.959116 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 00:57:03.959145 systemd[1]: Stopped ignition-disks.service. May 15 00:57:03.959877 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 00:57:03.959914 systemd[1]: Stopped ignition-kargs.service. May 15 00:57:03.962190 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 00:57:03.962225 systemd[1]: Stopped ignition-setup.service. May 15 00:57:03.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.963965 systemd[1]: Stopping iscsiuio.service... May 15 00:57:03.965009 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 00:57:03.965088 systemd[1]: Finished initrd-cleanup.service. May 15 00:57:03.967131 systemd[1]: iscsiuio.service: Deactivated successfully. May 15 00:57:03.967223 systemd[1]: Stopped iscsiuio.service. May 15 00:57:03.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.969124 systemd[1]: Stopped target network.target. 
May 15 00:57:03.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.969970 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 00:57:03.969998 systemd[1]: Closed iscsiuio.socket. May 15 00:57:03.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.971450 systemd[1]: Stopping systemd-networkd.service... May 15 00:57:03.973181 systemd[1]: Stopping systemd-resolved.service... May 15 00:57:03.976994 systemd-networkd[720]: eth0: DHCPv6 lease lost May 15 00:57:03.992000 audit: BPF prog-id=9 op=UNLOAD May 15 00:57:03.978070 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 00:57:03.978139 systemd[1]: Stopped systemd-networkd.service. May 15 00:57:03.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.980650 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 00:57:03.980682 systemd[1]: Closed systemd-networkd.socket. May 15 00:57:03.983287 systemd[1]: Stopping network-cleanup.service... May 15 00:57:03.984660 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 00:57:03.984704 systemd[1]: Stopped parse-ip-for-networkd.service. May 15 00:57:03.986449 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 00:57:04.002000 audit: BPF prog-id=6 op=UNLOAD May 15 00:57:04.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:03.986483 systemd[1]: Stopped systemd-sysctl.service. May 15 00:57:04.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.988653 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 00:57:03.988700 systemd[1]: Stopped systemd-modules-load.service. May 15 00:57:03.990472 systemd[1]: Stopping systemd-udevd.service... May 15 00:57:03.994619 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 00:57:04.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.995177 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 00:57:04.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:03.995296 systemd[1]: Stopped systemd-resolved.service. May 15 00:57:04.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:04.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:04.001872 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 00:57:04.001995 systemd[1]: Stopped systemd-udevd.service. 
May 15 00:57:04.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:04.004166 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 00:57:04.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:04.004240 systemd[1]: Stopped network-cleanup.service. May 15 00:57:04.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:04.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:04.006103 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 00:57:04.006141 systemd[1]: Closed systemd-udevd-control.socket. May 15 00:57:04.007850 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 00:57:04.007877 systemd[1]: Closed systemd-udevd-kernel.socket. May 15 00:57:04.010994 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 00:57:04.011026 systemd[1]: Stopped dracut-pre-udev.service. May 15 00:57:04.012023 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 00:57:04.012052 systemd[1]: Stopped dracut-cmdline.service. May 15 00:57:04.015432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 00:57:04.015466 systemd[1]: Stopped dracut-cmdline-ask.service. May 15 00:57:04.018048 systemd[1]: Starting initrd-udevadm-cleanup-db.service... 
May 15 00:57:04.018134 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 00:57:04.018181 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 15 00:57:04.021023 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 00:57:04.021058 systemd[1]: Stopped kmod-static-nodes.service. May 15 00:57:04.023105 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 00:57:04.023135 systemd[1]: Stopped systemd-vconsole-setup.service. May 15 00:57:04.025654 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 00:57:04.025712 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 15 00:57:04.026166 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 00:57:04.026230 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 15 00:57:04.087104 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 00:57:04.087213 systemd[1]: Stopped sysroot-boot.service. May 15 00:57:04.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:04.088711 systemd[1]: Reached target initrd-switch-root.target. May 15 00:57:04.091559 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 00:57:04.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:04.091597 systemd[1]: Stopped initrd-setup-root.service. May 15 00:57:04.094939 systemd[1]: Starting initrd-switch-root.service... May 15 00:57:04.105216 systemd[1]: Switching root. 
May 15 00:57:04.105000 audit: BPF prog-id=8 op=UNLOAD May 15 00:57:04.105000 audit: BPF prog-id=7 op=UNLOAD May 15 00:57:04.107000 audit: BPF prog-id=5 op=UNLOAD May 15 00:57:04.107000 audit: BPF prog-id=4 op=UNLOAD May 15 00:57:04.107000 audit: BPF prog-id=3 op=UNLOAD May 15 00:57:04.123478 systemd-journald[198]: Journal stopped May 15 00:57:06.701233 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). May 15 00:57:06.701276 kernel: SELinux: Class mctp_socket not defined in policy. May 15 00:57:06.701292 kernel: SELinux: Class anon_inode not defined in policy. May 15 00:57:06.701302 kernel: SELinux: the above unknown classes and permissions will be allowed May 15 00:57:06.701311 kernel: SELinux: policy capability network_peer_controls=1 May 15 00:57:06.701320 kernel: SELinux: policy capability open_perms=1 May 15 00:57:06.701329 kernel: SELinux: policy capability extended_socket_class=1 May 15 00:57:06.701338 kernel: SELinux: policy capability always_check_network=0 May 15 00:57:06.701347 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 00:57:06.701358 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 00:57:06.701367 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 00:57:06.701377 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 00:57:06.701386 kernel: kauditd_printk_skb: 71 callbacks suppressed May 15 00:57:06.701399 kernel: audit: type=1403 audit(1747270624.206:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 00:57:06.701410 systemd[1]: Successfully loaded SELinux policy in 37.486ms. May 15 00:57:06.701424 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.612ms. 
May 15 00:57:06.701435 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 00:57:06.701446 systemd[1]: Detected virtualization kvm. May 15 00:57:06.701459 systemd[1]: Detected architecture x86-64. May 15 00:57:06.701469 systemd[1]: Detected first boot. May 15 00:57:06.701479 systemd[1]: Initializing machine ID from VM UUID. May 15 00:57:06.701488 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 15 00:57:06.701499 kernel: audit: type=1400 audit(1747270624.458:83): avc: denied { associate } for pid=927 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 15 00:57:06.701511 kernel: audit: type=1300 audit(1747270624.458:83): arch=c000003e syscall=188 success=yes exit=0 a0=c000157672 a1=c0000daae0 a2=c0000e2a00 a3=32 items=0 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:06.701521 kernel: audit: type=1327 audit(1747270624.458:83): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 00:57:06.701531 kernel: audit: type=1400 audit(1747270624.460:84): avc: denied { associate } for pid=927 comm="torcx-generator" name="usr" 
scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 15 00:57:06.701541 kernel: audit: type=1300 audit(1747270624.460:84): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000157749 a2=1ed a3=0 items=2 ppid=910 pid=927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:06.701553 kernel: audit: type=1307 audit(1747270624.460:84): cwd="/" May 15 00:57:06.701563 kernel: audit: type=1302 audit(1747270624.460:84): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:06.701573 kernel: audit: type=1302 audit(1747270624.460:84): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:06.701586 kernel: audit: type=1327 audit(1747270624.460:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 00:57:06.701598 systemd[1]: Populated /etc with preset unit settings. May 15 00:57:06.701609 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 00:57:06.701619 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 15 00:57:06.701631 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:57:06.701643 systemd[1]: Queued start job for default target multi-user.target. May 15 00:57:06.701652 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 00:57:06.701662 systemd[1]: Created slice system-addon\x2dconfig.slice. May 15 00:57:06.701672 systemd[1]: Created slice system-addon\x2drun.slice. May 15 00:57:06.701682 systemd[1]: Created slice system-getty.slice. May 15 00:57:06.701691 systemd[1]: Created slice system-modprobe.slice. May 15 00:57:06.701702 systemd[1]: Created slice system-serial\x2dgetty.slice. May 15 00:57:06.701713 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 15 00:57:06.701723 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 15 00:57:06.701733 systemd[1]: Created slice user.slice. May 15 00:57:06.701742 systemd[1]: Started systemd-ask-password-console.path. May 15 00:57:06.701753 systemd[1]: Started systemd-ask-password-wall.path. May 15 00:57:06.701764 systemd[1]: Set up automount boot.automount. May 15 00:57:06.701773 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 15 00:57:06.701783 systemd[1]: Reached target integritysetup.target. May 15 00:57:06.701793 systemd[1]: Reached target remote-cryptsetup.target. May 15 00:57:06.701805 systemd[1]: Reached target remote-fs.target. May 15 00:57:06.701816 systemd[1]: Reached target slices.target. May 15 00:57:06.701826 systemd[1]: Reached target swap.target. May 15 00:57:06.701836 systemd[1]: Reached target torcx.target. May 15 00:57:06.701846 systemd[1]: Reached target veritysetup.target. May 15 00:57:06.701856 systemd[1]: Listening on systemd-coredump.socket. May 15 00:57:06.701869 systemd[1]: Listening on systemd-initctl.socket. 
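The docker.socket warning at the top of this block (ListenStream= pointing below the legacy /var/run/ directory) is fixed by referencing /run directly in the unit; systemd rewrites the path at load time, but the unit file should be updated to match. A hedged sketch of the relevant excerpt — only the one changed directive is shown:

```ini
# /run/systemd/system/docker.socket (excerpt)
[Socket]
# was: ListenStream=/var/run/docker.sock — /var/run is a symlink to /run on this system
ListenStream=/run/docker.sock
```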
May 15 00:57:06.701882 systemd[1]: Listening on systemd-journald-audit.socket. May 15 00:57:06.701894 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 00:57:06.701907 systemd[1]: Listening on systemd-journald.socket. May 15 00:57:06.701922 systemd[1]: Listening on systemd-networkd.socket. May 15 00:57:06.701935 systemd[1]: Listening on systemd-udevd-control.socket. May 15 00:57:06.701947 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 00:57:06.701969 systemd[1]: Listening on systemd-userdbd.socket. May 15 00:57:06.701979 systemd[1]: Mounting dev-hugepages.mount... May 15 00:57:06.701990 systemd[1]: Mounting dev-mqueue.mount... May 15 00:57:06.702000 systemd[1]: Mounting media.mount... May 15 00:57:06.702010 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:06.702020 systemd[1]: Mounting sys-kernel-debug.mount... May 15 00:57:06.702032 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 00:57:06.702044 systemd[1]: Mounting tmp.mount... May 15 00:57:06.702054 systemd[1]: Starting flatcar-tmpfiles.service... May 15 00:57:06.702064 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:57:06.702074 systemd[1]: Starting kmod-static-nodes.service... May 15 00:57:06.702083 systemd[1]: Starting modprobe@configfs.service... May 15 00:57:06.702093 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:57:06.702102 systemd[1]: Starting modprobe@drm.service... May 15 00:57:06.702119 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:57:06.702129 systemd[1]: Starting modprobe@fuse.service... May 15 00:57:06.702140 systemd[1]: Starting modprobe@loop.service... May 15 00:57:06.702150 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
May 15 00:57:06.702161 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 15 00:57:06.702171 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 15 00:57:06.702182 systemd[1]: Starting systemd-journald.service... May 15 00:57:06.702191 kernel: loop: module loaded May 15 00:57:06.702200 kernel: fuse: init (API version 7.34) May 15 00:57:06.702210 systemd[1]: Starting systemd-modules-load.service... May 15 00:57:06.702220 systemd[1]: Starting systemd-network-generator.service... May 15 00:57:06.702232 systemd[1]: Starting systemd-remount-fs.service... May 15 00:57:06.702242 systemd[1]: Starting systemd-udev-trigger.service... May 15 00:57:06.702252 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:06.702261 systemd[1]: Mounted dev-hugepages.mount. May 15 00:57:06.702274 systemd-journald[1026]: Journal started May 15 00:57:06.702315 systemd-journald[1026]: Runtime Journal (/run/log/journal/768d8987db4f4db9be87503512234687) is 6.0M, max 48.4M, 42.4M free. 
May 15 00:57:06.619000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 00:57:06.619000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 15 00:57:06.697000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 00:57:06.697000 audit[1026]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcfa977d90 a2=4000 a3=7ffcfa977e2c items=0 ppid=1 pid=1026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:06.697000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 00:57:06.704971 systemd[1]: Started systemd-journald.service. May 15 00:57:06.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.705755 systemd[1]: Mounted dev-mqueue.mount. May 15 00:57:06.706645 systemd[1]: Mounted media.mount. May 15 00:57:06.707462 systemd[1]: Mounted sys-kernel-debug.mount. May 15 00:57:06.708376 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 00:57:06.709302 systemd[1]: Mounted tmp.mount. May 15 00:57:06.710478 systemd[1]: Finished flatcar-tmpfiles.service. May 15 00:57:06.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:06.711819 systemd[1]: Finished kmod-static-nodes.service. May 15 00:57:06.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.712924 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 00:57:06.713194 systemd[1]: Finished modprobe@configfs.service. May 15 00:57:06.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.714283 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:57:06.714510 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:57:06.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.715696 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:57:06.715898 systemd[1]: Finished modprobe@drm.service. May 15 00:57:06.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:06.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.716920 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:57:06.717135 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:57:06.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.718263 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 00:57:06.718464 systemd[1]: Finished modprobe@fuse.service. May 15 00:57:06.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.719565 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:57:06.719794 systemd[1]: Finished modprobe@loop.service. May 15 00:57:06.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:06.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.721074 systemd[1]: Finished systemd-modules-load.service. May 15 00:57:06.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.722523 systemd[1]: Finished systemd-network-generator.service. May 15 00:57:06.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.723939 systemd[1]: Finished systemd-remount-fs.service. May 15 00:57:06.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.725162 systemd[1]: Reached target network-pre.target. May 15 00:57:06.727084 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 00:57:06.728882 systemd[1]: Mounting sys-kernel-config.mount... May 15 00:57:06.729898 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 00:57:06.732268 systemd[1]: Starting systemd-hwdb-update.service... May 15 00:57:06.733980 systemd[1]: Starting systemd-journal-flush.service... May 15 00:57:06.735158 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:57:06.736165 systemd[1]: Starting systemd-random-seed.service... 
May 15 00:57:06.737127 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:57:06.738055 systemd[1]: Starting systemd-sysctl.service... May 15 00:57:06.739985 systemd[1]: Starting systemd-sysusers.service... May 15 00:57:06.740933 systemd-journald[1026]: Time spent on flushing to /var/log/journal/768d8987db4f4db9be87503512234687 is 12.808ms for 1104 entries. May 15 00:57:06.740933 systemd-journald[1026]: System Journal (/var/log/journal/768d8987db4f4db9be87503512234687) is 8.0M, max 195.6M, 187.6M free. May 15 00:57:06.993947 systemd-journald[1026]: Received client request to flush runtime journal. May 15 00:57:06.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:06.744368 systemd[1]: Mounted sys-fs-fuse-connections.mount. 
May 15 00:57:06.745329 systemd[1]: Mounted sys-kernel-config.mount. May 15 00:57:06.994426 udevadm[1059]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 00:57:06.748571 systemd[1]: Finished systemd-udev-trigger.service. May 15 00:57:06.750763 systemd[1]: Starting systemd-udev-settle.service... May 15 00:57:06.776939 systemd[1]: Finished systemd-sysctl.service. May 15 00:57:06.779897 systemd[1]: Finished systemd-sysusers.service. May 15 00:57:06.782716 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 00:57:06.797769 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 00:57:06.936837 systemd[1]: Finished systemd-random-seed.service. May 15 00:57:06.937887 systemd[1]: Reached target first-boot-complete.target. May 15 00:57:06.994876 systemd[1]: Finished systemd-journal-flush.service. May 15 00:57:06.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.236460 systemd[1]: Finished systemd-hwdb-update.service. May 15 00:57:07.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.238562 systemd[1]: Starting systemd-udevd.service... May 15 00:57:07.254860 systemd-udevd[1070]: Using default interface naming scheme 'v252'. May 15 00:57:07.267634 systemd[1]: Started systemd-udevd.service. May 15 00:57:07.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:07.279473 systemd[1]: Starting systemd-networkd.service... May 15 00:57:07.283904 systemd[1]: Starting systemd-userdbd.service... May 15 00:57:07.311583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 00:57:07.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.317210 systemd[1]: Started systemd-userdbd.service. May 15 00:57:07.320666 systemd[1]: Found device dev-ttyS0.device. May 15 00:57:07.346981 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 May 15 00:57:07.352974 kernel: ACPI: button: Power Button [PWRF] May 15 00:57:07.358668 systemd-networkd[1089]: lo: Link UP May 15 00:57:07.358681 systemd-networkd[1089]: lo: Gained carrier May 15 00:57:07.359130 systemd-networkd[1089]: Enumeration completed May 15 00:57:07.359222 systemd[1]: Started systemd-networkd.service. May 15 00:57:07.359397 systemd-networkd[1089]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 00:57:07.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:07.360860 systemd-networkd[1089]: eth0: Link UP May 15 00:57:07.360870 systemd-networkd[1089]: eth0: Gained carrier May 15 00:57:07.362000 audit[1080]: AVC avc: denied { confidentiality } for pid=1080 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 15 00:57:07.362000 audit[1080]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55e3afc95e00 a1=338ac a2=7f3e66f07bc5 a3=5 items=110 ppid=1070 pid=1080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:07.362000 audit: CWD cwd="/" May 15 00:57:07.362000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=1 name=(null) inode=906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=2 name=(null) inode=906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=3 name=(null) inode=907 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=4 name=(null) inode=906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=5 name=(null) inode=908 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=6 name=(null) inode=906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=7 name=(null) inode=909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=8 name=(null) inode=909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=9 name=(null) inode=910 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=10 name=(null) inode=909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=11 name=(null) inode=911 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=12 name=(null) inode=909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=13 name=(null) inode=912 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=14 name=(null) inode=909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=15 name=(null) inode=913 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=16 name=(null) inode=909 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=17 name=(null) inode=914 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=18 name=(null) inode=906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=19 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=20 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=21 name=(null) inode=916 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=22 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=23 name=(null) inode=917 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
00:57:07.362000 audit: PATH item=24 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=25 name=(null) inode=918 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=26 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=27 name=(null) inode=919 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=28 name=(null) inode=915 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=29 name=(null) inode=920 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=30 name=(null) inode=906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=31 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=32 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=33 name=(null) inode=922 
dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=34 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=35 name=(null) inode=923 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=36 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=37 name=(null) inode=924 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=38 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=39 name=(null) inode=925 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=40 name=(null) inode=921 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=41 name=(null) inode=926 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=42 name=(null) inode=906 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=43 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=44 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=45 name=(null) inode=928 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=46 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=47 name=(null) inode=929 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=48 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=49 name=(null) inode=930 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=50 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=51 name=(null) inode=931 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=52 name=(null) inode=927 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=53 name=(null) inode=932 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=55 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=56 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=57 name=(null) inode=934 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=58 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=59 name=(null) inode=935 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=60 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
00:57:07.362000 audit: PATH item=61 name=(null) inode=936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=62 name=(null) inode=936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=63 name=(null) inode=937 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=64 name=(null) inode=936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=65 name=(null) inode=938 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=66 name=(null) inode=936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=67 name=(null) inode=939 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=68 name=(null) inode=936 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=69 name=(null) inode=940 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=70 name=(null) inode=936 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=71 name=(null) inode=941 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=72 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=73 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=74 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=75 name=(null) inode=943 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=76 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=77 name=(null) inode=944 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=78 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=79 name=(null) inode=945 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=80 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=81 name=(null) inode=946 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=82 name=(null) inode=942 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=83 name=(null) inode=947 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=84 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=85 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=86 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=87 name=(null) inode=949 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=88 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=89 name=(null) inode=950 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=90 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=91 name=(null) inode=951 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=92 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=93 name=(null) inode=952 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=94 name=(null) inode=948 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=95 name=(null) inode=953 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=96 name=(null) inode=933 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=97 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 
00:57:07.362000 audit: PATH item=98 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=99 name=(null) inode=955 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=100 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=101 name=(null) inode=956 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=102 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=103 name=(null) inode=957 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=104 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=105 name=(null) inode=958 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=106 name=(null) inode=954 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=107 name=(null) 
inode=959 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PATH item=109 name=(null) inode=960 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:57:07.362000 audit: PROCTITLE proctitle="(udev-worker)" May 15 00:57:07.373975 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 May 15 00:57:07.375116 systemd-networkd[1089]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 00:57:07.388291 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device May 15 00:57:07.394545 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt May 15 00:57:07.394654 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) May 15 00:57:07.394763 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD May 15 00:57:07.394855 kernel: mousedev: PS/2 mouse device common for all mice May 15 00:57:07.448363 kernel: kvm: Nested Virtualization enabled May 15 00:57:07.448460 kernel: SVM: kvm: Nested Paging enabled May 15 00:57:07.448474 kernel: SVM: Virtual VMLOAD VMSAVE supported May 15 00:57:07.448486 kernel: SVM: Virtual GIF supported May 15 00:57:07.463982 kernel: EDAC MC: Ver: 3.0.0 May 15 00:57:07.489393 systemd[1]: Finished systemd-udev-settle.service. May 15 00:57:07.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:07.491574 systemd[1]: Starting lvm2-activation-early.service... May 15 00:57:07.499436 lvm[1108]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:57:07.527679 systemd[1]: Finished lvm2-activation-early.service. May 15 00:57:07.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.528692 systemd[1]: Reached target cryptsetup.target. May 15 00:57:07.530469 systemd[1]: Starting lvm2-activation.service... May 15 00:57:07.533981 lvm[1110]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 00:57:07.558620 systemd[1]: Finished lvm2-activation.service. May 15 00:57:07.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.559577 systemd[1]: Reached target local-fs-pre.target. May 15 00:57:07.560448 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 00:57:07.560471 systemd[1]: Reached target local-fs.target. May 15 00:57:07.561308 systemd[1]: Reached target machines.target. May 15 00:57:07.563113 systemd[1]: Starting ldconfig.service... May 15 00:57:07.564077 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:57:07.564130 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:57:07.565033 systemd[1]: Starting systemd-boot-update.service... May 15 00:57:07.566999 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... 
May 15 00:57:07.568985 systemd[1]: Starting systemd-machine-id-commit.service... May 15 00:57:07.571306 systemd[1]: Starting systemd-sysext.service... May 15 00:57:07.575057 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1113 (bootctl) May 15 00:57:07.576231 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 00:57:07.583840 systemd[1]: Unmounting usr-share-oem.mount... May 15 00:57:07.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.587487 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 00:57:07.588861 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 00:57:07.589059 systemd[1]: Unmounted usr-share-oem.mount. May 15 00:57:07.599990 kernel: loop0: detected capacity change from 0 to 210664 May 15 00:57:07.608825 systemd-fsck[1121]: fsck.fat 4.2 (2021-01-31) May 15 00:57:07.608825 systemd-fsck[1121]: /dev/vda1: 791 files, 120710/258078 clusters May 15 00:57:07.610236 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 00:57:07.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.612837 systemd[1]: Mounting boot.mount... May 15 00:57:07.621263 systemd[1]: Mounted boot.mount. May 15 00:57:07.762989 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 00:57:07.766203 systemd[1]: Finished systemd-boot-update.service. 
May 15 00:57:07.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.771250 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 00:57:07.771828 systemd[1]: Finished systemd-machine-id-commit.service. May 15 00:57:07.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.776991 kernel: loop1: detected capacity change from 0 to 210664 May 15 00:57:07.780680 (sd-sysext)[1134]: Using extensions 'kubernetes'. May 15 00:57:07.781020 (sd-sysext)[1134]: Merged extensions into '/usr'. May 15 00:57:07.795463 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:07.796821 systemd[1]: Mounting usr-share-oem.mount... May 15 00:57:07.797826 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:57:07.798939 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:57:07.800697 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:57:07.802896 systemd[1]: Starting modprobe@loop.service... May 15 00:57:07.803725 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:57:07.803876 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:57:07.804025 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:07.806551 systemd[1]: Mounted usr-share-oem.mount. 
May 15 00:57:07.807632 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:57:07.807845 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:57:07.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.809148 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:57:07.809276 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:57:07.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.810673 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:57:07.810830 systemd[1]: Finished modprobe@loop.service. May 15 00:57:07.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:07.812252 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:57:07.812353 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:57:07.813429 systemd[1]: Finished systemd-sysext.service. May 15 00:57:07.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:07.815693 systemd[1]: Starting ensure-sysext.service... May 15 00:57:07.817455 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 00:57:07.823194 systemd[1]: Reloading. May 15 00:57:07.827058 systemd-tmpfiles[1148]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 00:57:07.827685 systemd-tmpfiles[1148]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 00:57:07.829020 systemd-tmpfiles[1148]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 00:57:07.832062 ldconfig[1112]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 00:57:07.865827 /usr/lib/systemd/system-generators/torcx-generator[1169]: time="2025-05-15T00:57:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:57:07.865849 /usr/lib/systemd/system-generators/torcx-generator[1169]: time="2025-05-15T00:57:07Z" level=info msg="torcx already run" May 15 00:57:07.938921 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. May 15 00:57:07.938938 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 00:57:07.957304 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:57:08.007019 systemd[1]: Finished ldconfig.service. May 15 00:57:08.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.008878 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 00:57:08.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.011727 systemd[1]: Starting audit-rules.service... May 15 00:57:08.013488 systemd[1]: Starting clean-ca-certificates.service... May 15 00:57:08.015481 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 00:57:08.017683 systemd[1]: Starting systemd-resolved.service... May 15 00:57:08.019598 systemd[1]: Starting systemd-timesyncd.service... May 15 00:57:08.021279 systemd[1]: Starting systemd-update-utmp.service... May 15 00:57:08.023006 systemd[1]: Finished clean-ca-certificates.service. May 15 00:57:08.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:08.025856 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:57:08.026000 audit[1228]: SYSTEM_BOOT pid=1228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 15 00:57:08.030207 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:08.030438 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:57:08.031692 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:57:08.033654 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:57:08.035546 systemd[1]: Starting modprobe@loop.service... May 15 00:57:08.036381 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:57:08.036604 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:57:08.036774 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:57:08.036891 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:08.038298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:57:08.038467 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:57:08.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:08.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.039876 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:57:08.040029 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:57:08.041360 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:57:08.041512 systemd[1]: Finished modprobe@loop.service. May 15 00:57:08.042932 systemd[1]: Finished systemd-journal-catalog-update.service. May 15 00:57:08.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:08.044721 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:57:08.044857 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:57:08.046344 systemd[1]: Starting systemd-update-done.service... May 15 00:57:08.048970 systemd[1]: Finished systemd-update-utmp.service. May 15 00:57:08.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.052375 systemd[1]: Finished systemd-update-done.service. May 15 00:57:08.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.053818 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:08.054125 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:57:08.055283 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:57:08.057069 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:57:08.058857 systemd[1]: Starting modprobe@loop.service... May 15 00:57:08.059789 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:57:08.059890 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:57:08.059983 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
May 15 00:57:08.060045 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:08.060884 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:57:08.061043 systemd[1]: Finished modprobe@dm_mod.service. May 15 00:57:08.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.062875 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:57:08.063015 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:57:08.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.064613 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:57:08.064857 systemd[1]: Finished modprobe@loop.service. May 15 00:57:08.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:08.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.066395 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:57:08.066476 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 00:57:08.068820 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:08.069036 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 00:57:08.070611 systemd[1]: Starting modprobe@dm_mod.service... May 15 00:57:08.072498 systemd[1]: Starting modprobe@drm.service... May 15 00:57:08.074425 systemd[1]: Starting modprobe@efi_pstore.service... May 15 00:57:08.076390 systemd[1]: Starting modprobe@loop.service... May 15 00:57:08.077475 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 00:57:08.077594 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 00:57:08.078665 systemd[1]: Starting systemd-networkd-wait-online.service... May 15 00:57:08.079819 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 00:57:08.079937 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). May 15 00:57:08.081425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 00:57:08.081657 systemd[1]: Finished modprobe@dm_mod.service. 
May 15 00:57:08.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:08.082000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 15 00:57:08.082000 audit[1261]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe09692a60 a2=420 a3=0 items=0 ppid=1219 pid=1261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:08.082000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 15 00:57:08.083894 augenrules[1261]: No rules May 15 00:57:08.084066 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 00:57:08.084266 systemd[1]: Finished modprobe@drm.service. May 15 00:57:08.085845 systemd[1]: Finished audit-rules.service. May 15 00:57:08.087446 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 00:57:08.087612 systemd[1]: Finished modprobe@efi_pstore.service. May 15 00:57:08.089076 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 00:57:08.089263 systemd[1]: Finished modprobe@loop.service. May 15 00:57:08.090738 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 00:57:08.090860 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
May 15 00:57:08.092279 systemd[1]: Finished ensure-sysext.service.
May 15 00:57:08.096903 systemd-resolved[1223]: Positive Trust Anchors:
May 15 00:57:08.097173 systemd-resolved[1223]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 00:57:08.097273 systemd-resolved[1223]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 15 00:57:08.104261 systemd-resolved[1223]: Defaulting to hostname 'linux'.
May 15 00:57:08.105601 systemd[1]: Started systemd-resolved.service.
May 15 00:57:08.106588 systemd[1]: Reached target network.target.
May 15 00:57:08.107444 systemd[1]: Reached target nss-lookup.target.
May 15 00:57:08.108455 systemd[1]: Started systemd-timesyncd.service.
May 15 00:57:08.109727 systemd-timesyncd[1227]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 15 00:57:08.109735 systemd[1]: Reached target sysinit.target.
May 15 00:57:08.109770 systemd-timesyncd[1227]: Initial clock synchronization to Thu 2025-05-15 00:57:08.473351 UTC.
May 15 00:57:08.110670 systemd[1]: Started motdgen.path.
May 15 00:57:08.111453 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 15 00:57:08.112657 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 15 00:57:08.113578 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 15 00:57:08.113604 systemd[1]: Reached target paths.target.
May 15 00:57:08.114428 systemd[1]: Reached target time-set.target.
May 15 00:57:08.115430 systemd[1]: Started logrotate.timer.
May 15 00:57:08.116365 systemd[1]: Started mdadm.timer.
May 15 00:57:08.117163 systemd[1]: Reached target timers.target.
May 15 00:57:08.118378 systemd[1]: Listening on dbus.socket.
May 15 00:57:08.120298 systemd[1]: Starting docker.socket...
May 15 00:57:08.121892 systemd[1]: Listening on sshd.socket.
May 15 00:57:08.122796 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 00:57:08.123123 systemd[1]: Listening on docker.socket.
May 15 00:57:08.124035 systemd[1]: Reached target sockets.target.
May 15 00:57:08.124897 systemd[1]: Reached target basic.target.
May 15 00:57:08.125839 systemd[1]: System is tainted: cgroupsv1
May 15 00:57:08.125876 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 00:57:08.125894 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 15 00:57:08.126784 systemd[1]: Starting containerd.service...
May 15 00:57:08.128425 systemd[1]: Starting dbus.service...
May 15 00:57:08.130136 systemd[1]: Starting enable-oem-cloudinit.service...
May 15 00:57:08.131930 systemd[1]: Starting extend-filesystems.service...
May 15 00:57:08.133005 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 15 00:57:08.134265 jq[1283]: false
May 15 00:57:08.133913 systemd[1]: Starting motdgen.service...
May 15 00:57:08.136071 systemd[1]: Starting prepare-helm.service...
May 15 00:57:08.137818 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 15 00:57:08.139611 systemd[1]: Starting sshd-keygen.service...
May 15 00:57:08.141995 systemd[1]: Starting systemd-logind.service...
May 15 00:57:08.142773 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 15 00:57:08.142831 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 15 00:57:08.143745 systemd[1]: Starting update-engine.service...
May 15 00:57:08.145422 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 15 00:57:08.147697 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 15 00:57:08.148185 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 15 00:57:08.149460 jq[1298]: true
May 15 00:57:08.148982 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 15 00:57:08.149196 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 15 00:57:08.162627 jq[1303]: true
May 15 00:57:08.158404 dbus-daemon[1282]: [system] SELinux support is enabled
May 15 00:57:08.158630 systemd[1]: Started dbus.service.
May 15 00:57:08.161162 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 15 00:57:08.161182 systemd[1]: Reached target system-config.target.
May 15 00:57:08.163510 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 15 00:57:08.163524 systemd[1]: Reached target user-config.target.
May 15 00:57:08.172740 extend-filesystems[1284]: Found loop1
May 15 00:57:08.172740 extend-filesystems[1284]: Found sr0
May 15 00:57:08.172740 extend-filesystems[1284]: Found vda
May 15 00:57:08.172740 extend-filesystems[1284]: Found vda1
May 15 00:57:08.172740 extend-filesystems[1284]: Found vda2
May 15 00:57:08.172740 extend-filesystems[1284]: Found vda3
May 15 00:57:08.172740 extend-filesystems[1284]: Found usr
May 15 00:57:08.172740 extend-filesystems[1284]: Found vda4
May 15 00:57:08.172740 extend-filesystems[1284]: Found vda6
May 15 00:57:08.172740 extend-filesystems[1284]: Found vda7
May 15 00:57:08.172740 extend-filesystems[1284]: Found vda9
May 15 00:57:08.172740 extend-filesystems[1284]: Checking size of /dev/vda9
May 15 00:57:08.170059 systemd[1]: motdgen.service: Deactivated successfully.
May 15 00:57:08.197487 tar[1301]: linux-amd64/helm
May 15 00:57:08.197692 extend-filesystems[1284]: Resized partition /dev/vda9
May 15 00:57:08.170280 systemd[1]: Finished motdgen.service.
May 15 00:57:08.200244 extend-filesystems[1340]: resize2fs 1.46.5 (30-Dec-2021)
May 15 00:57:08.201722 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 15 00:57:08.207838 update_engine[1294]: I0515 00:57:08.207713 1294 main.cc:92] Flatcar Update Engine starting
May 15 00:57:08.220584 update_engine[1294]: I0515 00:57:08.210191 1294 update_check_scheduler.cc:74] Next update check in 9m31s
May 15 00:57:08.210009 systemd[1]: Started update-engine.service.
May 15 00:57:08.212263 systemd[1]: Started locksmithd.service.
May 15 00:57:08.222595 systemd-logind[1293]: Watching system buttons on /dev/input/event1 (Power Button)
May 15 00:57:08.222612 systemd-logind[1293]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 15 00:57:08.223043 systemd-logind[1293]: New seat seat0.
May 15 00:57:08.228152 systemd[1]: Started systemd-logind.service.
May 15 00:57:08.230334 bash[1337]: Updated "/home/core/.ssh/authorized_keys"
May 15 00:57:08.230717 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 15 00:57:08.232976 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 15 00:57:08.264231 extend-filesystems[1340]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 15 00:57:08.264231 extend-filesystems[1340]: old_desc_blocks = 1, new_desc_blocks = 1
May 15 00:57:08.264231 extend-filesystems[1340]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 15 00:57:08.263367 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 15 00:57:08.272441 env[1307]: time="2025-05-15T00:57:08.260892457Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 15 00:57:08.272771 extend-filesystems[1284]: Resized filesystem in /dev/vda9
May 15 00:57:08.263581 systemd[1]: Finished extend-filesystems.service.
May 15 00:57:08.284337 env[1307]: time="2025-05-15T00:57:08.284233625Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 15 00:57:08.284422 env[1307]: time="2025-05-15T00:57:08.284371343Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.285619185Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.285661855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.285968019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.285984861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.285996964Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.286005399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.286097592Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.286359604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.286556333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 15 00:57:08.286769 env[1307]: time="2025-05-15T00:57:08.286576641Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 15 00:57:08.287126 env[1307]: time="2025-05-15T00:57:08.286632266Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 15 00:57:08.287126 env[1307]: time="2025-05-15T00:57:08.286646903Z" level=info msg="metadata content store policy set" policy=shared
May 15 00:57:08.291581 locksmithd[1342]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 15 00:57:08.298155 env[1307]: time="2025-05-15T00:57:08.298110113Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 15 00:57:08.298155 env[1307]: time="2025-05-15T00:57:08.298147683Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 15 00:57:08.298232 env[1307]: time="2025-05-15T00:57:08.298190975Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 15 00:57:08.298232 env[1307]: time="2025-05-15T00:57:08.298226291Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 15 00:57:08.298309 env[1307]: time="2025-05-15T00:57:08.298240107Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 15 00:57:08.298335 env[1307]: time="2025-05-15T00:57:08.298307684Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 15 00:57:08.298357 env[1307]: time="2025-05-15T00:57:08.298321490Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 15 00:57:08.298357 env[1307]: time="2025-05-15T00:57:08.298348180Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 15 00:57:08.298396 env[1307]: time="2025-05-15T00:57:08.298362136Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 15 00:57:08.298396 env[1307]: time="2025-05-15T00:57:08.298376182Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 15 00:57:08.298396 env[1307]: time="2025-05-15T00:57:08.298387363Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 15 00:57:08.298464 env[1307]: time="2025-05-15T00:57:08.298411058Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 15 00:57:08.298534 env[1307]: time="2025-05-15T00:57:08.298515403Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 15 00:57:08.298628 env[1307]: time="2025-05-15T00:57:08.298603859Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299031091Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299092076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299110500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299172066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299187996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299200038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299210648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299222490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299235815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299246515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299257276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299271212Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299407107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299420552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 15 00:57:08.301606 env[1307]: time="2025-05-15T00:57:08.299431753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 15 00:57:08.300704 systemd[1]: Started containerd.service.
May 15 00:57:08.302004 env[1307]: time="2025-05-15T00:57:08.299442834Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 15 00:57:08.302004 env[1307]: time="2025-05-15T00:57:08.299459315Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 15 00:57:08.302004 env[1307]: time="2025-05-15T00:57:08.299470936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 15 00:57:08.302004 env[1307]: time="2025-05-15T00:57:08.299488099Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 15 00:57:08.302004 env[1307]: time="2025-05-15T00:57:08.299524066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.299711859Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.299765138Z" level=info msg="Connect containerd service"
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.299800274Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.300300824Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.300535304Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.300564719Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.300604273Z" level=info msg="containerd successfully booted in 0.065101s"
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.300843211Z" level=info msg="Start subscribing containerd event"
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.300895850Z" level=info msg="Start recovering state"
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.301396139Z" level=info msg="Start event monitor"
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.301425043Z" level=info msg="Start snapshots syncer"
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.301443638Z" level=info msg="Start cni network conf syncer for default"
May 15 00:57:08.302110 env[1307]: time="2025-05-15T00:57:08.301450701Z" level=info msg="Start streaming server"
May 15 00:57:08.565269 tar[1301]: linux-amd64/LICENSE
May 15 00:57:08.565269 tar[1301]: linux-amd64/README.md
May 15 00:57:08.569087 systemd[1]: Finished prepare-helm.service.
May 15 00:57:08.902690 sshd_keygen[1318]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 15 00:57:08.920780 systemd[1]: Finished sshd-keygen.service.
May 15 00:57:08.923262 systemd[1]: Starting issuegen.service...
May 15 00:57:08.927827 systemd[1]: issuegen.service: Deactivated successfully.
May 15 00:57:08.928130 systemd[1]: Finished issuegen.service.
May 15 00:57:08.930390 systemd[1]: Starting systemd-user-sessions.service...
May 15 00:57:08.935733 systemd[1]: Finished systemd-user-sessions.service.
May 15 00:57:08.938022 systemd[1]: Started getty@tty1.service.
May 15 00:57:08.939734 systemd[1]: Started serial-getty@ttyS0.service.
May 15 00:57:08.940803 systemd[1]: Reached target getty.target.
May 15 00:57:09.307324 systemd-networkd[1089]: eth0: Gained IPv6LL
May 15 00:57:09.309234 systemd[1]: Finished systemd-networkd-wait-online.service.
May 15 00:57:09.310728 systemd[1]: Reached target network-online.target.
May 15 00:57:09.313037 systemd[1]: Starting kubelet.service...
May 15 00:57:09.943797 systemd[1]: Started kubelet.service.
May 15 00:57:09.945283 systemd[1]: Reached target multi-user.target.
May 15 00:57:09.947799 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 15 00:57:09.955253 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 15 00:57:09.955597 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 15 00:57:09.960294 systemd[1]: Startup finished in 5.103s (kernel) + 5.792s (userspace) = 10.895s.
May 15 00:57:10.409626 kubelet[1384]: E0515 00:57:10.409491 1384 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:57:10.411245 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:57:10.411393 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:57:12.483197 systemd[1]: Created slice system-sshd.slice.
May 15 00:57:12.484303 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:36550.service.
May 15 00:57:12.521734 sshd[1395]: Accepted publickey for core from 10.0.0.1 port 36550 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:12.523259 sshd[1395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:12.531665 systemd-logind[1293]: New session 1 of user core.
May 15 00:57:12.532468 systemd[1]: Created slice user-500.slice.
May 15 00:57:12.533336 systemd[1]: Starting user-runtime-dir@500.service...
May 15 00:57:12.541435 systemd[1]: Finished user-runtime-dir@500.service.
May 15 00:57:12.542526 systemd[1]: Starting user@500.service...
May 15 00:57:12.545197 (systemd)[1399]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:12.613148 systemd[1399]: Queued start job for default target default.target.
May 15 00:57:12.613327 systemd[1399]: Reached target paths.target.
May 15 00:57:12.613342 systemd[1399]: Reached target sockets.target.
May 15 00:57:12.613354 systemd[1399]: Reached target timers.target.
May 15 00:57:12.613364 systemd[1399]: Reached target basic.target.
May 15 00:57:12.613398 systemd[1399]: Reached target default.target.
May 15 00:57:12.613416 systemd[1399]: Startup finished in 63ms.
May 15 00:57:12.613499 systemd[1]: Started user@500.service.
May 15 00:57:12.614358 systemd[1]: Started session-1.scope.
May 15 00:57:12.665381 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:36554.service.
May 15 00:57:12.698903 sshd[1409]: Accepted publickey for core from 10.0.0.1 port 36554 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:12.700039 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:12.703403 systemd-logind[1293]: New session 2 of user core.
May 15 00:57:12.704262 systemd[1]: Started session-2.scope.
May 15 00:57:12.760167 sshd[1409]: pam_unix(sshd:session): session closed for user core
May 15 00:57:12.762490 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:36568.service.
May 15 00:57:12.762888 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:36554.service: Deactivated successfully.
May 15 00:57:12.763726 systemd-logind[1293]: Session 2 logged out. Waiting for processes to exit.
May 15 00:57:12.763763 systemd[1]: session-2.scope: Deactivated successfully.
May 15 00:57:12.764706 systemd-logind[1293]: Removed session 2.
May 15 00:57:12.798037 sshd[1414]: Accepted publickey for core from 10.0.0.1 port 36568 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:12.799104 sshd[1414]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:12.802319 systemd-logind[1293]: New session 3 of user core.
May 15 00:57:12.803094 systemd[1]: Started session-3.scope.
May 15 00:57:12.853529 sshd[1414]: pam_unix(sshd:session): session closed for user core
May 15 00:57:12.855866 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:36574.service.
May 15 00:57:12.856322 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:36568.service: Deactivated successfully.
May 15 00:57:12.857468 systemd-logind[1293]: Session 3 logged out. Waiting for processes to exit.
May 15 00:57:12.857496 systemd[1]: session-3.scope: Deactivated successfully.
May 15 00:57:12.858855 systemd-logind[1293]: Removed session 3.
May 15 00:57:12.889523 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 36574 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:12.890536 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:12.893785 systemd-logind[1293]: New session 4 of user core.
May 15 00:57:12.894462 systemd[1]: Started session-4.scope.
May 15 00:57:12.948213 sshd[1421]: pam_unix(sshd:session): session closed for user core
May 15 00:57:12.950683 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:36584.service.
May 15 00:57:12.951324 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:36574.service: Deactivated successfully.
May 15 00:57:12.952230 systemd-logind[1293]: Session 4 logged out. Waiting for processes to exit.
May 15 00:57:12.952303 systemd[1]: session-4.scope: Deactivated successfully.
May 15 00:57:12.953291 systemd-logind[1293]: Removed session 4.
May 15 00:57:12.983792 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 36584 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:12.984736 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:12.987816 systemd-logind[1293]: New session 5 of user core.
May 15 00:57:12.988601 systemd[1]: Started session-5.scope.
May 15 00:57:13.045157 sudo[1434]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 00:57:13.045340 sudo[1434]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 15 00:57:13.052193 dbus-daemon[1282]: \xd0=i\xcd+V: received setenforce notice (enforcing=1383926672)
May 15 00:57:13.054255 sudo[1434]: pam_unix(sudo:session): session closed for user root
May 15 00:57:13.055641 sshd[1428]: pam_unix(sshd:session): session closed for user core
May 15 00:57:13.058085 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:36594.service.
May 15 00:57:13.059125 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:36584.service: Deactivated successfully.
May 15 00:57:13.060008 systemd-logind[1293]: Session 5 logged out. Waiting for processes to exit.
May 15 00:57:13.060045 systemd[1]: session-5.scope: Deactivated successfully.
May 15 00:57:13.061048 systemd-logind[1293]: Removed session 5.
May 15 00:57:13.092851 sshd[1436]: Accepted publickey for core from 10.0.0.1 port 36594 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:13.093750 sshd[1436]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:13.096962 systemd-logind[1293]: New session 6 of user core.
May 15 00:57:13.097732 systemd[1]: Started session-6.scope.
May 15 00:57:13.151050 sudo[1443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 00:57:13.151255 sudo[1443]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 15 00:57:13.153578 sudo[1443]: pam_unix(sudo:session): session closed for user root
May 15 00:57:13.157496 sudo[1442]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 15 00:57:13.157665 sudo[1442]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 15 00:57:13.164859 systemd[1]: Stopping audit-rules.service...
May 15 00:57:13.165000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 15 00:57:13.166422 auditctl[1446]: No rules
May 15 00:57:13.166733 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 00:57:13.166964 systemd[1]: Stopped audit-rules.service.
May 15 00:57:13.173591 kernel: kauditd_printk_skb: 185 callbacks suppressed
May 15 00:57:13.173639 kernel: audit: type=1305 audit(1747270633.165:153): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
May 15 00:57:13.173657 kernel: audit: type=1300 audit(1747270633.165:153): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc25bdf0d0 a2=420 a3=0 items=0 ppid=1 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:13.165000 audit[1446]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc25bdf0d0 a2=420 a3=0 items=0 ppid=1 pid=1446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:13.168411 systemd[1]: Starting audit-rules.service...
May 15 00:57:13.165000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
May 15 00:57:13.175247 kernel: audit: type=1327 audit(1747270633.165:153): proctitle=2F7362696E2F617564697463746C002D44
May 15 00:57:13.175283 kernel: audit: type=1131 audit(1747270633.166:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.183750 augenrules[1464]: No rules
May 15 00:57:13.184358 systemd[1]: Finished audit-rules.service.
May 15 00:57:13.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.185274 sudo[1442]: pam_unix(sudo:session): session closed for user root
May 15 00:57:13.183000 audit[1442]: USER_END pid=1442 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.188226 sshd[1436]: pam_unix(sshd:session): session closed for user core
May 15 00:57:13.190416 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:36594.service: Deactivated successfully.
May 15 00:57:13.191139 systemd-logind[1293]: Session 6 logged out. Waiting for processes to exit.
May 15 00:57:13.191164 systemd[1]: session-6.scope: Deactivated successfully.
May 15 00:57:13.191697 kernel: audit: type=1130 audit(1747270633.182:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.191729 kernel: audit: type=1106 audit(1747270633.183:156): pid=1442 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.183000 audit[1442]: CRED_DISP pid=1442 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.196030 kernel: audit: type=1104 audit(1747270633.183:157): pid=1442 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.196083 kernel: audit: type=1106 audit(1747270633.187:158): pid=1436 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 00:57:13.187000 audit[1436]: USER_END pid=1436 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 00:57:13.193671 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:36604.service.
May 15 00:57:13.194783 systemd-logind[1293]: Removed session 6.
May 15 00:57:13.188000 audit[1436]: CRED_DISP pid=1436 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 00:57:13.202934 kernel: audit: type=1104 audit(1747270633.188:159): pid=1436 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 00:57:13.203005 kernel: audit: type=1131 audit(1747270633.189:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.134:22-10.0.0.1:36594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.134:22-10.0.0.1:36594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.134:22-10.0.0.1:36604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.230000 audit[1471]: USER_ACCT pid=1471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 00:57:13.231684 sshd[1471]: Accepted publickey for core from 10.0.0.1 port 36604 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ
May 15 00:57:13.231000 audit[1471]: CRED_ACQ pid=1471 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 00:57:13.231000 audit[1471]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd84d8c00 a2=3 a3=0 items=0 ppid=1 pid=1471 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:13.231000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 15 00:57:13.232617 sshd[1471]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 15 00:57:13.235779 systemd-logind[1293]: New session 7 of user core.
May 15 00:57:13.236493 systemd[1]: Started session-7.scope.
May 15 00:57:13.238000 audit[1471]: USER_START pid=1471 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 00:57:13.240000 audit[1474]: CRED_ACQ pid=1474 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
May 15 00:57:13.286000 audit[1475]: USER_ACCT pid=1475 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.287796 sudo[1475]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 00:57:13.286000 audit[1475]: CRED_REFR pid=1475 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.288003 sudo[1475]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 15 00:57:13.288000 audit[1475]: USER_START pid=1475 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
May 15 00:57:13.306589 systemd[1]: Starting docker.service...
May 15 00:57:13.346474 env[1487]: time="2025-05-15T00:57:13.346420005Z" level=info msg="Starting up"
May 15 00:57:13.347890 env[1487]: time="2025-05-15T00:57:13.347839432Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 00:57:13.347890 env[1487]: time="2025-05-15T00:57:13.347858338Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 00:57:13.347890 env[1487]: time="2025-05-15T00:57:13.347876832Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 15 00:57:13.347890 env[1487]: time="2025-05-15T00:57:13.347886315Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 00:57:13.349369 env[1487]: time="2025-05-15T00:57:13.349323783Z" level=info msg="parsed scheme: \"unix\"" module=grpc
May 15 00:57:13.349369 env[1487]: time="2025-05-15T00:57:13.349355535Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
May 15 00:57:13.349479 env[1487]: time="2025-05-15T00:57:13.349381620Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
May 15 00:57:13.349479 env[1487]: time="2025-05-15T00:57:13.349396298Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
May 15 00:57:13.354492 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2932846727-merged.mount: Deactivated successfully.
May 15 00:57:14.001665 env[1487]: time="2025-05-15T00:57:14.001619150Z" level=warning msg="Your kernel does not support cgroup blkio weight"
May 15 00:57:14.001665 env[1487]: time="2025-05-15T00:57:14.001644658Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
May 15 00:57:14.001903 env[1487]: time="2025-05-15T00:57:14.001794477Z" level=info msg="Loading containers: start."
May 15 00:57:14.049000 audit[1521]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1521 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.049000 audit[1521]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffcdf055430 a2=0 a3=7ffcdf05541c items=0 ppid=1487 pid=1521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.049000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
May 15 00:57:14.050000 audit[1523]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.050000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffddd0425c0 a2=0 a3=7ffddd0425ac items=0 ppid=1487 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.050000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
May 15 00:57:14.052000 audit[1525]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.052000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd6f3f6550 a2=0 a3=7ffd6f3f653c items=0 ppid=1487 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.052000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 15 00:57:14.053000 audit[1527]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.053000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff614d92c0 a2=0 a3=7fff614d92ac items=0 ppid=1487 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.053000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 15 00:57:14.055000 audit[1529]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.055000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff3323bf50 a2=0 a3=7fff3323bf3c items=0 ppid=1487 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.055000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
May 15 00:57:14.068000 audit[1534]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.068000 audit[1534]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd534a9580 a2=0 a3=7ffd534a956c items=0 ppid=1487 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.068000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
May 15 00:57:14.080000 audit[1536]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.080000 audit[1536]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffebef193b0 a2=0 a3=7ffebef1939c items=0 ppid=1487 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.080000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
May 15 00:57:14.082000 audit[1538]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.082000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffc2cbbdeb0 a2=0 a3=7ffc2cbbde9c items=0 ppid=1487 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.082000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
May 15 00:57:14.083000 audit[1540]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.083000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd64de1570 a2=0 a3=7ffd64de155c items=0 ppid=1487 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.083000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 15 00:57:14.092000 audit[1544]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1544 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.092000 audit[1544]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc6664fa00 a2=0 a3=7ffc6664f9ec items=0 ppid=1487 pid=1544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.092000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 15 00:57:14.097000 audit[1545]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.097000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff150b3580 a2=0 a3=7fff150b356c items=0 ppid=1487 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.097000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 15 00:57:14.107005 kernel: Initializing XFRM netlink socket
May 15 00:57:14.133024 env[1487]: time="2025-05-15T00:57:14.132994353Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
May 15 00:57:14.146000 audit[1553]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.146000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc930bba50 a2=0 a3=7ffc930bba3c items=0 ppid=1487 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.146000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
May 15 00:57:14.157000 audit[1556]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.157000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffd2bee01f0 a2=0 a3=7ffd2bee01dc items=0 ppid=1487 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.157000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
May 15 00:57:14.159000 audit[1559]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.159000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff64b8feb0 a2=0 a3=7fff64b8fe9c items=0 ppid=1487 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.159000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
May 15 00:57:14.161000 audit[1561]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.161000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fffe52e1ae0 a2=0 a3=7fffe52e1acc items=0 ppid=1487 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.161000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
May 15 00:57:14.163000 audit[1563]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.163000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fffbe2e7ce0 a2=0 a3=7fffbe2e7ccc items=0 ppid=1487 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.163000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
May 15 00:57:14.165000 audit[1565]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.165000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff39a36bb0 a2=0 a3=7fff39a36b9c items=0 ppid=1487 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.165000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
May 15 00:57:14.165000 audit[1567]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.165000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe3bc5ed90 a2=0 a3=7ffe3bc5ed7c items=0 ppid=1487 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.165000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
May 15 00:57:14.172000 audit[1571]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.172000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc3507a550 a2=0 a3=7ffc3507a53c items=0 ppid=1487 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.172000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
May 15 00:57:14.174000 audit[1573]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.174000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff2eb35470 a2=0 a3=7fff2eb3545c items=0 ppid=1487 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.174000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
May 15 00:57:14.175000 audit[1575]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1575 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.175000 audit[1575]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffc272d5340 a2=0 a3=7ffc272d532c items=0 ppid=1487 pid=1575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.175000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
May 15 00:57:14.177000 audit[1577]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.177000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe89a70a30 a2=0 a3=7ffe89a70a1c items=0 ppid=1487 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.177000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
May 15 00:57:14.178697 systemd-networkd[1089]: docker0: Link UP
May 15 00:57:14.186000 audit[1581]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.186000 audit[1581]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcc5bece50 a2=0 a3=7ffcc5bece3c items=0 ppid=1487 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.186000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
May 15 00:57:14.195000 audit[1582]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:14.195000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe91863cf0 a2=0 a3=7ffe91863cdc items=0 ppid=1487 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:14.195000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
May 15 00:57:14.197173 env[1487]: time="2025-05-15T00:57:14.197139223Z" level=info msg="Loading containers: done."
May 15 00:57:14.212425 env[1487]: time="2025-05-15T00:57:14.212371891Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 00:57:14.212612 env[1487]: time="2025-05-15T00:57:14.212579850Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
May 15 00:57:14.212727 env[1487]: time="2025-05-15T00:57:14.212700039Z" level=info msg="Daemon has completed initialization"
May 15 00:57:14.229077 systemd[1]: Started docker.service.
May 15 00:57:14.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:14.233120 env[1487]: time="2025-05-15T00:57:14.233070783Z" level=info msg="API listen on /run/docker.sock"
May 15 00:57:14.939527 env[1307]: time="2025-05-15T00:57:14.939465561Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\""
May 15 00:57:15.686200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486226496.mount: Deactivated successfully.
May 15 00:57:17.244400 env[1307]: time="2025-05-15T00:57:17.244346130Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:17.246289 env[1307]: time="2025-05-15T00:57:17.246200659Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:17.248196 env[1307]: time="2025-05-15T00:57:17.248147031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:17.249926 env[1307]: time="2025-05-15T00:57:17.249896063Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:17.250754 env[1307]: time="2025-05-15T00:57:17.250708764Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\""
May 15 00:57:17.259128 env[1307]: time="2025-05-15T00:57:17.259100454Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\""
May 15 00:57:19.127689 env[1307]: time="2025-05-15T00:57:19.127639713Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:19.130091 env[1307]: time="2025-05-15T00:57:19.130037132Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:19.131891 env[1307]: time="2025-05-15T00:57:19.131839474Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:19.133588 env[1307]: time="2025-05-15T00:57:19.133563769Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:19.134253 env[1307]: time="2025-05-15T00:57:19.134218937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\""
May 15 00:57:19.147226 env[1307]: time="2025-05-15T00:57:19.147164401Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\""
May 15 00:57:20.596678 env[1307]: time="2025-05-15T00:57:20.596617259Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:20.599135 env[1307]: time="2025-05-15T00:57:20.599083246Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:20.601123 env[1307]: time="2025-05-15T00:57:20.601078626Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:20.602784 env[1307]: time="2025-05-15T00:57:20.602756175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:20.603434 env[1307]: time="2025-05-15T00:57:20.603400412Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\""
May 15 00:57:20.611838 env[1307]: time="2025-05-15T00:57:20.611779376Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 15 00:57:20.662355 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 00:57:20.662608 systemd[1]: Stopped kubelet.service.
May 15 00:57:20.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:20.663476 kernel: kauditd_printk_skb: 84 callbacks suppressed
May 15 00:57:20.663520 kernel: audit: type=1130 audit(1747270640.661:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:20.664180 systemd[1]: Starting kubelet.service...
May 15 00:57:20.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:20.669590 kernel: audit: type=1131 audit(1747270640.661:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:20.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:20.759618 systemd[1]: Started kubelet.service.
May 15 00:57:20.762976 kernel: audit: type=1130 audit(1747270640.758:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:20.978454 kubelet[1654]: E0515 00:57:20.978341 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:57:20.981426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:57:20.981585 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:57:20.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 15 00:57:20.985987 kernel: audit: type=1131 audit(1747270640.980:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 15 00:57:22.373514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount610323884.mount: Deactivated successfully.
May 15 00:57:23.568625 env[1307]: time="2025-05-15T00:57:23.568555231Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:23.571495 env[1307]: time="2025-05-15T00:57:23.571447677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:23.573042 env[1307]: time="2025-05-15T00:57:23.572991078Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:23.574459 env[1307]: time="2025-05-15T00:57:23.574411029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:23.574819 env[1307]: time="2025-05-15T00:57:23.574787955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\""
May 15 00:57:23.590875 env[1307]: time="2025-05-15T00:57:23.590832276Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
May 15 00:57:24.153527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644086432.mount: Deactivated successfully.
May 15 00:57:25.173326 env[1307]: time="2025-05-15T00:57:25.173269120Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:25.175600 env[1307]: time="2025-05-15T00:57:25.175580856Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:25.177376 env[1307]: time="2025-05-15T00:57:25.177347323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:25.179018 env[1307]: time="2025-05-15T00:57:25.178989154Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:25.179597 env[1307]: time="2025-05-15T00:57:25.179571026Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
May 15 00:57:25.195128 env[1307]: time="2025-05-15T00:57:25.195081188Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
May 15 00:57:25.713047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount608524586.mount: Deactivated successfully.
May 15 00:57:25.717920 env[1307]: time="2025-05-15T00:57:25.717874485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:25.720329 env[1307]: time="2025-05-15T00:57:25.720273137Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:25.721833 env[1307]: time="2025-05-15T00:57:25.721804964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:25.723315 env[1307]: time="2025-05-15T00:57:25.723279871Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:25.723702 env[1307]: time="2025-05-15T00:57:25.723670616Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
May 15 00:57:25.740278 env[1307]: time="2025-05-15T00:57:25.740237140Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
May 15 00:57:26.200089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793500690.mount: Deactivated successfully.
May 15 00:57:29.271649 env[1307]: time="2025-05-15T00:57:29.271594305Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:29.274075 env[1307]: time="2025-05-15T00:57:29.274012895Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:29.276208 env[1307]: time="2025-05-15T00:57:29.276180800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:29.278234 env[1307]: time="2025-05-15T00:57:29.278192533Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 15 00:57:29.279070 env[1307]: time="2025-05-15T00:57:29.279030760Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
May 15 00:57:31.232493 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 15 00:57:31.232737 systemd[1]: Stopped kubelet.service.
May 15 00:57:31.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.234192 systemd[1]: Starting kubelet.service...
May 15 00:57:31.240000 kernel: audit: type=1130 audit(1747270651.231:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.240151 kernel: audit: type=1131 audit(1747270651.231:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.309838 systemd[1]: Started kubelet.service.
May 15 00:57:31.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.316807 kernel: audit: type=1130 audit(1747270651.309:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.361596 kubelet[1765]: E0515 00:57:31.361560 1765 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 00:57:31.364096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 00:57:31.364277 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 00:57:31.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 15 00:57:31.369018 kernel: audit: type=1131 audit(1747270651.363:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
May 15 00:57:31.461320 systemd[1]: Stopped kubelet.service.
May 15 00:57:31.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.463273 systemd[1]: Starting kubelet.service...
May 15 00:57:31.468528 kernel: audit: type=1130 audit(1747270651.460:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.468594 kernel: audit: type=1131 audit(1747270651.460:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:31.478602 systemd[1]: Reloading.
May 15 00:57:31.544876 /usr/lib/systemd/system-generators/torcx-generator[1803]: time="2025-05-15T00:57:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 15 00:57:31.545234 /usr/lib/systemd/system-generators/torcx-generator[1803]: time="2025-05-15T00:57:31Z" level=info msg="torcx already run"
May 15 00:57:32.410194 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 15 00:57:32.410224 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 15 00:57:32.429633 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 00:57:32.495671 systemd[1]: Started kubelet.service.
May 15 00:57:32.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:32.497841 systemd[1]: Stopping kubelet.service...
May 15 00:57:32.498149 systemd[1]: kubelet.service: Deactivated successfully.
May 15 00:57:32.498342 systemd[1]: Stopped kubelet.service.
May 15 00:57:32.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:32.499888 systemd[1]: Starting kubelet.service...
May 15 00:57:32.502696 kernel: audit: type=1130 audit(1747270652.494:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:32.502752 kernel: audit: type=1131 audit(1747270652.497:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:32.571563 systemd[1]: Started kubelet.service.
May 15 00:57:32.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:32.578978 kernel: audit: type=1130 audit(1747270652.571:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 15 00:57:32.615477 kubelet[1862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:57:32.615477 kubelet[1862]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 00:57:32.615477 kubelet[1862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 00:57:32.615826 kubelet[1862]: I0515 00:57:32.615522 1862 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 00:57:32.986879 kubelet[1862]: I0515 00:57:32.986847 1862 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
May 15 00:57:32.986879 kubelet[1862]: I0515 00:57:32.986871 1862 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 00:57:32.987074 kubelet[1862]: I0515 00:57:32.987064 1862 server.go:927] "Client rotation is on, will bootstrap in background"
May 15 00:57:33.001337 kubelet[1862]: E0515 00:57:33.001294 1862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.134:6443: connect: connection refused
May 15 00:57:33.002764 kubelet[1862]: I0515 00:57:33.002729 1862 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 00:57:33.016288 kubelet[1862]: I0515 00:57:33.016242 1862 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 00:57:33.018244 kubelet[1862]: I0515 00:57:33.018201 1862 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 00:57:33.018434 kubelet[1862]: I0515 00:57:33.018239 1862 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
May 15 00:57:33.018595 kubelet[1862]: I0515 00:57:33.018438 1862 topology_manager.go:138] "Creating topology manager with none policy"
May 15 00:57:33.018595 kubelet[1862]: I0515 00:57:33.018448 1862 container_manager_linux.go:301] "Creating device plugin manager"
May 15 00:57:33.018595 kubelet[1862]: I0515 00:57:33.018565 1862 state_mem.go:36] "Initialized new in-memory state store"
May 15 00:57:33.019211 kubelet[1862]: I0515 00:57:33.019192 1862 kubelet.go:400] "Attempting to sync node with API server"
May 15 00:57:33.019211 kubelet[1862]: I0515 00:57:33.019209 1862 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 00:57:33.019276 kubelet[1862]: I0515 00:57:33.019228 1862 kubelet.go:312] "Adding apiserver pod source"
May 15 00:57:33.019276 kubelet[1862]: I0515 00:57:33.019241 1862 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 00:57:33.019797 kubelet[1862]: W0515 00:57:33.019743 1862 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
May 15 00:57:33.019837 kubelet[1862]: E0515 00:57:33.019800 1862 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
May 15 00:57:33.020008 kubelet[1862]: W0515 00:57:33.019982 1862 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
May 15 00:57:33.020008 kubelet[1862]: E0515 00:57:33.020007 1862 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
May 15 00:57:33.022273 kubelet[1862]: I0515 00:57:33.022247 1862 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 15 00:57:33.026413 kubelet[1862]: I0515 00:57:33.026390 1862 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 00:57:33.026495 kubelet[1862]: W0515 00:57:33.026437 1862 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 00:57:33.026904 kubelet[1862]: I0515 00:57:33.026886 1862 server.go:1264] "Started kubelet"
May 15 00:57:33.026000 audit[1862]: AVC avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 15 00:57:33.027886 kubelet[1862]: I0515 00:57:33.027783 1862 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument"
May 15 00:57:33.027886 kubelet[1862]: I0515 00:57:33.027810 1862 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument"
May 15 00:57:33.027886 kubelet[1862]: I0515 00:57:33.027853 1862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 00:57:33.026000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 15 00:57:33.026000 audit[1862]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000af4510 a1=c000a92ca8 a2=c000af44e0 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.026000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 15 00:57:33.026000 audit[1862]: AVC avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 15 00:57:33.026000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0"
May 15 00:57:33.026000 audit[1862]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b96140 a1=c000a92cc0 a2=c000af45a0 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.026000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669
May 15 00:57:33.031980 kernel: audit: type=1400 audit(1747270653.026:208): avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
May 15 00:57:33.040000 audit[1874]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:33.040000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff78337e70 a2=0 a3=7fff78337e5c items=0 ppid=1862 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.040000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
May 15 00:57:33.041534 kubelet[1862]: I0515 00:57:33.041478 1862 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 00:57:33.041000 audit[1875]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:33.041000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee96b81f0 a2=0 a3=7ffee96b81dc items=0 ppid=1862 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.041000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
May 15 00:57:33.042878 kubelet[1862]: I0515 00:57:33.042538 1862 server.go:455] "Adding debug handlers to kubelet server"
May 15 00:57:33.043350 kubelet[1862]: I0515 00:57:33.043277 1862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 00:57:33.043503 kubelet[1862]: I0515 00:57:33.043490 1862 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 00:57:33.044832 kubelet[1862]: I0515 00:57:33.044821 1862 volume_manager.go:291] "Starting Kubelet Volume Manager"
May 15 00:57:33.045223 kubelet[1862]: I0515 00:57:33.045211 1862 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
May 15 00:57:33.045275 kubelet[1862]: I0515 00:57:33.045252 1862 reconciler.go:26] "Reconciler: start to sync state"
May 15 00:57:33.045531 kubelet[1862]: W0515 00:57:33.045491 1862 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
May 15 00:57:33.045577 kubelet[1862]: E0515 00:57:33.045534 1862 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
May 15 00:57:33.045577 kubelet[1862]: E0515 00:57:33.045568 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms"
May 15 00:57:33.046296 kubelet[1862]: E0515 00:57:33.046211 1862 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f8d66c8219680 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 00:57:33.026866816 +0000 UTC m=+0.447704863,LastTimestamp:2025-05-15 00:57:33.026866816 +0000 UTC m=+0.447704863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 00:57:33.046409 kubelet[1862]: I0515 00:57:33.046341 1862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 00:57:33.045000 audit[1877]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:33.045000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc91b888e0 a2=0 a3=7ffc91b888cc items=0 ppid=1862 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.045000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
May 15 00:57:33.049137 kubelet[1862]: E0515 00:57:33.049116 1862 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 00:57:33.048000 audit[1879]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:33.048000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff3099d680 a2=0 a3=7fff3099d66c items=0 ppid=1862 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
May 15 00:57:33.050585 kubelet[1862]: I0515 00:57:33.050565 1862 factory.go:221] Registration of the containerd container factory successfully
May 15 00:57:33.050585 kubelet[1862]: I0515 00:57:33.050582 1862 factory.go:221] Registration of the systemd container factory successfully
May 15 00:57:33.056000 audit[1885]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:33.056000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc16ec0710 a2=0 a3=7ffc16ec06fc items=0 ppid=1862 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
May 15 00:57:33.057000 audit[1886]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
May 15 00:57:33.057000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc347aec30 a2=0 a3=7ffc347aec1c items=0 ppid=1862 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.057000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
May 15 00:57:33.058858 kubelet[1862]: I0515 00:57:33.057740 1862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 00:57:33.058858 kubelet[1862]: I0515 00:57:33.058545 1862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 00:57:33.058858 kubelet[1862]: I0515 00:57:33.058563 1862 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 00:57:33.058858 kubelet[1862]: I0515 00:57:33.058577 1862 kubelet.go:2337] "Starting kubelet main sync loop"
May 15 00:57:33.058858 kubelet[1862]: E0515 00:57:33.058619 1862 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 00:57:33.058000 audit[1888]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables"
May 15 00:57:33.058000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc219f2690 a2=0 a3=7ffc219f267c items=0 ppid=1862 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
May 15 00:57:33.058000 audit[1889]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
May 15 00:57:33.058000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6f0a97a0 a2=0 a3=7ffc6f0a978c items=0 ppid=1862 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
May 15 00:57:33.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
May 15 00:57:33.059941 kubelet[1862]: W0515 00:57:33.059893 1862 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed
to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 15 00:57:33.059941 kubelet[1862]: E0515 00:57:33.059941 1862 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 15 00:57:33.059000 audit[1891]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:33.059000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffd34e5ce60 a2=0 a3=7ffd34e5ce4c items=0 ppid=1862 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:33.059000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 15 00:57:33.059000 audit[1892]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:33.059000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdb90f650 a2=0 a3=7ffcdb90f63c items=0 ppid=1862 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:33.059000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 15 00:57:33.060000 audit[1893]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1893 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:33.060000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd18430e60 a2=0 a3=7ffd18430e4c items=0 ppid=1862 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:33.060000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 15 00:57:33.065000 audit[1894]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:33.065000 audit[1894]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0aed21c0 a2=0 a3=7ffc0aed21ac items=0 ppid=1862 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:33.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 15 00:57:33.069044 kubelet[1862]: I0515 00:57:33.069015 1862 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:57:33.069044 kubelet[1862]: I0515 00:57:33.069034 1862 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:57:33.069120 kubelet[1862]: I0515 00:57:33.069066 1862 state_mem.go:36] "Initialized new in-memory state store" May 15 00:57:33.146579 kubelet[1862]: I0515 00:57:33.146541 1862 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:57:33.146994 kubelet[1862]: E0515 00:57:33.146927 1862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" 
node="localhost" May 15 00:57:33.159105 kubelet[1862]: E0515 00:57:33.159061 1862 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 15 00:57:33.246822 kubelet[1862]: E0515 00:57:33.246692 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" May 15 00:57:33.305145 kubelet[1862]: I0515 00:57:33.305106 1862 policy_none.go:49] "None policy: Start" May 15 00:57:33.306065 kubelet[1862]: I0515 00:57:33.306028 1862 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:57:33.306065 kubelet[1862]: I0515 00:57:33.306068 1862 state_mem.go:35] "Initializing new in-memory state store" May 15 00:57:33.311151 kubelet[1862]: I0515 00:57:33.311129 1862 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:57:33.310000 audit[1862]: AVC avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:57:33.310000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 00:57:33.310000 audit[1862]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00072fcb0 a1=c000744180 a2=c00072fc80 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:33.310000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 00:57:33.311372 kubelet[1862]: I0515 00:57:33.311201 1862 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 15 00:57:33.311372 kubelet[1862]: I0515 00:57:33.311292 1862 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:57:33.311421 kubelet[1862]: I0515 00:57:33.311398 1862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:57:33.312575 kubelet[1862]: E0515 00:57:33.312556 1862 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 00:57:33.348783 kubelet[1862]: I0515 00:57:33.348759 1862 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:57:33.349055 kubelet[1862]: E0515 00:57:33.349036 1862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" May 15 00:57:33.360218 kubelet[1862]: I0515 00:57:33.360171 1862 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 00:57:33.360976 kubelet[1862]: I0515 00:57:33.360936 1862 topology_manager.go:215] "Topology Admit Handler" podUID="8828716f8b459e585a00cdf8ee259890" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 00:57:33.361527 kubelet[1862]: I0515 00:57:33.361490 1862 topology_manager.go:215] "Topology Admit Handler" 
podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 00:57:33.447307 kubelet[1862]: I0515 00:57:33.447273 1862 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:33.447307 kubelet[1862]: I0515 00:57:33.447305 1862 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:33.447307 kubelet[1862]: I0515 00:57:33.447325 1862 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8828716f8b459e585a00cdf8ee259890-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8828716f8b459e585a00cdf8ee259890\") " pod="kube-system/kube-apiserver-localhost" May 15 00:57:33.447518 kubelet[1862]: I0515 00:57:33.447340 1862 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8828716f8b459e585a00cdf8ee259890-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8828716f8b459e585a00cdf8ee259890\") " pod="kube-system/kube-apiserver-localhost" May 15 00:57:33.447518 kubelet[1862]: I0515 00:57:33.447356 1862 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8828716f8b459e585a00cdf8ee259890-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"8828716f8b459e585a00cdf8ee259890\") " pod="kube-system/kube-apiserver-localhost" May 15 00:57:33.447518 kubelet[1862]: I0515 00:57:33.447368 1862 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:33.447518 kubelet[1862]: I0515 00:57:33.447380 1862 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:33.447518 kubelet[1862]: I0515 00:57:33.447392 1862 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 00:57:33.447627 kubelet[1862]: I0515 00:57:33.447404 1862 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:33.647775 kubelet[1862]: E0515 00:57:33.647650 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: 
connection refused" interval="800ms" May 15 00:57:33.664969 kubelet[1862]: E0515 00:57:33.664919 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:33.664969 kubelet[1862]: E0515 00:57:33.664919 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:33.665458 kubelet[1862]: E0515 00:57:33.665441 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:33.665553 env[1307]: time="2025-05-15T00:57:33.665514756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,}" May 15 00:57:33.665804 env[1307]: time="2025-05-15T00:57:33.665609558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8828716f8b459e585a00cdf8ee259890,Namespace:kube-system,Attempt:0,}" May 15 00:57:33.665873 env[1307]: time="2025-05-15T00:57:33.665853223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,}" May 15 00:57:33.750297 kubelet[1862]: I0515 00:57:33.750265 1862 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:57:33.750559 kubelet[1862]: E0515 00:57:33.750532 1862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" May 15 00:57:33.966997 kubelet[1862]: W0515 00:57:33.966859 1862 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 15 00:57:33.966997 kubelet[1862]: E0515 00:57:33.966916 1862 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 15 00:57:34.061083 kubelet[1862]: W0515 00:57:34.061033 1862 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 15 00:57:34.061083 kubelet[1862]: E0515 00:57:34.061076 1862 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 15 00:57:34.151396 kubelet[1862]: W0515 00:57:34.151300 1862 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 15 00:57:34.151528 kubelet[1862]: E0515 00:57:34.151400 1862 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused May 15 00:57:34.281720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198320441.mount: Deactivated successfully. 
May 15 00:57:34.288531 env[1307]: time="2025-05-15T00:57:34.288494923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.291434 env[1307]: time="2025-05-15T00:57:34.291407441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.292433 env[1307]: time="2025-05-15T00:57:34.292399387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.293381 env[1307]: time="2025-05-15T00:57:34.293327160Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.296295 env[1307]: time="2025-05-15T00:57:34.296254972Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.297276 env[1307]: time="2025-05-15T00:57:34.297258941Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.298431 env[1307]: time="2025-05-15T00:57:34.298381964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.299232 env[1307]: time="2025-05-15T00:57:34.299212167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 15 00:57:34.300750 env[1307]: time="2025-05-15T00:57:34.300723624Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.302516 env[1307]: time="2025-05-15T00:57:34.302493962Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.303728 env[1307]: time="2025-05-15T00:57:34.303708775Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.306116 env[1307]: time="2025-05-15T00:57:34.306088589Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:34.330155 env[1307]: time="2025-05-15T00:57:34.330094809Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:57:34.330155 env[1307]: time="2025-05-15T00:57:34.330131707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:57:34.330155 env[1307]: time="2025-05-15T00:57:34.330143047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:57:34.330329 env[1307]: time="2025-05-15T00:57:34.330284309Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a53b9acb50f2dffea56f07774462ba78957a97c49c83f762687a1e3d990b5bf pid=1903 runtime=io.containerd.runc.v2 May 15 00:57:34.338646 env[1307]: time="2025-05-15T00:57:34.338581923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:57:34.338646 env[1307]: time="2025-05-15T00:57:34.338620919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:57:34.338882 env[1307]: time="2025-05-15T00:57:34.338857663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:57:34.339059 env[1307]: time="2025-05-15T00:57:34.339026984Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d355b81d2270b5748b4adba893e73b07aff51e29344a4d2bcf8cfff0c7ea31e9 pid=1926 runtime=io.containerd.runc.v2 May 15 00:57:34.345869 env[1307]: time="2025-05-15T00:57:34.345522597Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:57:34.345869 env[1307]: time="2025-05-15T00:57:34.345563390Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:57:34.345869 env[1307]: time="2025-05-15T00:57:34.345575572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:57:34.345869 env[1307]: time="2025-05-15T00:57:34.345750663Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40c5bfddd20e56e3fa53550bbaa1b7df46fea03c29c607abdc10bc99aef6d9da pid=1951 runtime=io.containerd.runc.v2 May 15 00:57:34.385626 env[1307]: time="2025-05-15T00:57:34.385567214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8828716f8b459e585a00cdf8ee259890,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a53b9acb50f2dffea56f07774462ba78957a97c49c83f762687a1e3d990b5bf\"" May 15 00:57:34.386688 kubelet[1862]: E0515 00:57:34.386654 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:34.392484 env[1307]: time="2025-05-15T00:57:34.392432748Z" level=info msg="CreateContainer within sandbox \"6a53b9acb50f2dffea56f07774462ba78957a97c49c83f762687a1e3d990b5bf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 00:57:34.396848 env[1307]: time="2025-05-15T00:57:34.396807833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6ece95f10dbffa04b25ec3439a115512,Namespace:kube-system,Attempt:0,} returns sandbox id \"d355b81d2270b5748b4adba893e73b07aff51e29344a4d2bcf8cfff0c7ea31e9\"" May 15 00:57:34.397820 kubelet[1862]: E0515 00:57:34.397608 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:34.399536 env[1307]: time="2025-05-15T00:57:34.399503405Z" level=info msg="CreateContainer within sandbox \"d355b81d2270b5748b4adba893e73b07aff51e29344a4d2bcf8cfff0c7ea31e9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 00:57:34.401427 env[1307]: 
time="2025-05-15T00:57:34.401387531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b20b39a8540dba87b5883a6f0f602dba,Namespace:kube-system,Attempt:0,} returns sandbox id \"40c5bfddd20e56e3fa53550bbaa1b7df46fea03c29c607abdc10bc99aef6d9da\"" May 15 00:57:34.402196 kubelet[1862]: E0515 00:57:34.402160 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:34.403887 env[1307]: time="2025-05-15T00:57:34.403855902Z" level=info msg="CreateContainer within sandbox \"40c5bfddd20e56e3fa53550bbaa1b7df46fea03c29c607abdc10bc99aef6d9da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 00:57:34.420057 env[1307]: time="2025-05-15T00:57:34.420015250Z" level=info msg="CreateContainer within sandbox \"6a53b9acb50f2dffea56f07774462ba78957a97c49c83f762687a1e3d990b5bf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9abbf756dabeae346345fe57686a936c3d76148a8c9af57b6123566009b07b34\"" May 15 00:57:34.420652 env[1307]: time="2025-05-15T00:57:34.420627855Z" level=info msg="StartContainer for \"9abbf756dabeae346345fe57686a936c3d76148a8c9af57b6123566009b07b34\"" May 15 00:57:34.448363 kubelet[1862]: E0515 00:57:34.448312 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="1.6s" May 15 00:57:34.455904 env[1307]: time="2025-05-15T00:57:34.455860163Z" level=info msg="CreateContainer within sandbox \"d355b81d2270b5748b4adba893e73b07aff51e29344a4d2bcf8cfff0c7ea31e9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e9a11964c62af9f32c63ec9ad008a84f287e696f54b8d72ffdc7612be87b8ae1\"" May 15 00:57:34.456450 env[1307]: 
time="2025-05-15T00:57:34.456428955Z" level=info msg="StartContainer for \"e9a11964c62af9f32c63ec9ad008a84f287e696f54b8d72ffdc7612be87b8ae1\"" May 15 00:57:34.471253 env[1307]: time="2025-05-15T00:57:34.471203608Z" level=info msg="CreateContainer within sandbox \"40c5bfddd20e56e3fa53550bbaa1b7df46fea03c29c607abdc10bc99aef6d9da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a9b78fba023b43cbcdf4156643f6f91eb2e99458f1636ae1ec4872e31e54d5d3\"" May 15 00:57:34.479989 env[1307]: time="2025-05-15T00:57:34.478457866Z" level=info msg="StartContainer for \"9abbf756dabeae346345fe57686a936c3d76148a8c9af57b6123566009b07b34\" returns successfully" May 15 00:57:34.479989 env[1307]: time="2025-05-15T00:57:34.478745970Z" level=info msg="StartContainer for \"a9b78fba023b43cbcdf4156643f6f91eb2e99458f1636ae1ec4872e31e54d5d3\"" May 15 00:57:34.539201 env[1307]: time="2025-05-15T00:57:34.539089790Z" level=info msg="StartContainer for \"e9a11964c62af9f32c63ec9ad008a84f287e696f54b8d72ffdc7612be87b8ae1\" returns successfully" May 15 00:57:34.550259 env[1307]: time="2025-05-15T00:57:34.550215648Z" level=info msg="StartContainer for \"a9b78fba023b43cbcdf4156643f6f91eb2e99458f1636ae1ec4872e31e54d5d3\" returns successfully" May 15 00:57:34.551692 kubelet[1862]: I0515 00:57:34.551455 1862 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:57:35.064504 kubelet[1862]: E0515 00:57:35.064427 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:35.066638 kubelet[1862]: E0515 00:57:35.066624 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:35.067985 kubelet[1862]: E0515 00:57:35.067972 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:35.546182 kubelet[1862]: I0515 00:57:35.546061 1862 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 00:57:35.559849 kubelet[1862]: E0515 00:57:35.559616 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:35.659784 kubelet[1862]: E0515 00:57:35.659730 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:35.760354 kubelet[1862]: E0515 00:57:35.760315 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:35.861159 kubelet[1862]: E0515 00:57:35.861006 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:35.961649 kubelet[1862]: E0515 00:57:35.961602 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.062704 kubelet[1862]: E0515 00:57:36.062658 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.070663 kubelet[1862]: E0515 00:57:36.070634 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:36.075165 kubelet[1862]: E0515 00:57:36.075131 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:36.163549 kubelet[1862]: E0515 00:57:36.163408 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.264087 kubelet[1862]: E0515 00:57:36.264027 1862 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.364986 kubelet[1862]: E0515 00:57:36.364886 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.466097 kubelet[1862]: E0515 00:57:36.465947 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.476677 kubelet[1862]: E0515 00:57:36.476646 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:36.566780 kubelet[1862]: E0515 00:57:36.566730 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.667333 kubelet[1862]: E0515 00:57:36.667281 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.767942 kubelet[1862]: E0515 00:57:36.767814 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.868378 kubelet[1862]: E0515 00:57:36.868326 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:36.969168 kubelet[1862]: E0515 00:57:36.969057 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:37.069511 kubelet[1862]: E0515 00:57:37.069399 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:37.170003 kubelet[1862]: E0515 00:57:37.169935 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:37.270492 kubelet[1862]: E0515 00:57:37.270455 1862 kubelet_node_status.go:462] 
"Error getting the current node from lister" err="node \"localhost\" not found" May 15 00:57:37.527235 systemd[1]: Reloading. May 15 00:57:37.595942 /usr/lib/systemd/system-generators/torcx-generator[2155]: time="2025-05-15T00:57:37Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 15 00:57:37.595987 /usr/lib/systemd/system-generators/torcx-generator[2155]: time="2025-05-15T00:57:37Z" level=info msg="torcx already run" May 15 00:57:37.679002 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 00:57:37.679017 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 00:57:37.700558 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 00:57:37.773748 systemd[1]: Stopping kubelet.service... May 15 00:57:37.794474 systemd[1]: kubelet.service: Deactivated successfully. May 15 00:57:37.794682 systemd[1]: Stopped kubelet.service. May 15 00:57:37.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:37.795639 kernel: kauditd_printk_skb: 47 callbacks suppressed May 15 00:57:37.795697 kernel: audit: type=1131 audit(1747270657.793:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:57:37.796161 systemd[1]: Starting kubelet.service... May 15 00:57:37.872365 systemd[1]: Started kubelet.service. May 15 00:57:37.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:37.878972 kernel: audit: type=1130 audit(1747270657.873:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:37.916760 kubelet[2211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 00:57:37.916760 kubelet[2211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 00:57:37.916760 kubelet[2211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 15 00:57:37.917188 kubelet[2211]: I0515 00:57:37.916788 2211 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 00:57:37.920519 kubelet[2211]: I0515 00:57:37.920486 2211 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 15 00:57:37.920519 kubelet[2211]: I0515 00:57:37.920512 2211 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 00:57:37.920699 kubelet[2211]: I0515 00:57:37.920686 2211 server.go:927] "Client rotation is on, will bootstrap in background" May 15 00:57:37.921816 kubelet[2211]: I0515 00:57:37.921794 2211 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 15 00:57:37.924610 kubelet[2211]: I0515 00:57:37.924580 2211 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 00:57:37.936065 kubelet[2211]: I0515 00:57:37.936027 2211 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 15 00:57:37.936429 kubelet[2211]: I0515 00:57:37.936394 2211 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 00:57:37.936581 kubelet[2211]: I0515 00:57:37.936421 2211 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 15 00:57:37.936667 kubelet[2211]: I0515 00:57:37.936585 2211 topology_manager.go:138] "Creating topology manager with none policy" May 15 
00:57:37.936667 kubelet[2211]: I0515 00:57:37.936594 2211 container_manager_linux.go:301] "Creating device plugin manager" May 15 00:57:37.936667 kubelet[2211]: I0515 00:57:37.936623 2211 state_mem.go:36] "Initialized new in-memory state store" May 15 00:57:37.936737 kubelet[2211]: I0515 00:57:37.936690 2211 kubelet.go:400] "Attempting to sync node with API server" May 15 00:57:37.936737 kubelet[2211]: I0515 00:57:37.936701 2211 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 00:57:37.936737 kubelet[2211]: I0515 00:57:37.936718 2211 kubelet.go:312] "Adding apiserver pod source" May 15 00:57:37.936737 kubelet[2211]: I0515 00:57:37.936729 2211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 00:57:37.941579 kubelet[2211]: I0515 00:57:37.941560 2211 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 00:57:37.941700 kubelet[2211]: I0515 00:57:37.941676 2211 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 00:57:37.942014 kubelet[2211]: I0515 00:57:37.941997 2211 server.go:1264] "Started kubelet" May 15 00:57:37.947000 audit[2211]: AVC avc: denied { mac_admin } for pid=2211 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:57:37.948505 kubelet[2211]: I0515 00:57:37.948108 2211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 00:57:37.948505 kubelet[2211]: I0515 00:57:37.948488 2211 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 00:57:37.948610 kubelet[2211]: I0515 00:57:37.948570 2211 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 15 00:57:37.951254 kubelet[2211]: E0515 00:57:37.951229 2211 kubelet.go:1467] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 00:57:37.951254 kubelet[2211]: I0515 00:57:37.951246 2211 server.go:455] "Adding debug handlers to kubelet server" May 15 00:57:37.947000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 00:57:37.953477 kernel: audit: type=1400 audit(1747270657.947:225): avc: denied { mac_admin } for pid=2211 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:57:37.953552 kernel: audit: type=1401 audit(1747270657.947:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 00:57:37.953610 kernel: audit: type=1300 audit(1747270657.947:225): arch=c000003e syscall=188 success=no exit=-22 a0=c000b745a0 a1=c000e42300 a2=c000b74570 a3=25 items=0 ppid=1 pid=2211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:37.947000 audit[2211]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b745a0 a1=c000e42300 a2=c000b74570 a3=25 items=0 ppid=1 pid=2211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:37.956054 kubelet[2211]: I0515 00:57:37.956017 2211 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 15 00:57:37.956158 kubelet[2211]: I0515 00:57:37.956085 2211 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr 
/var/lib/kubelet/plugins: invalid argument" May 15 00:57:37.956158 kubelet[2211]: I0515 00:57:37.956124 2211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 00:57:37.956805 kubelet[2211]: I0515 00:57:37.956787 2211 volume_manager.go:291] "Starting Kubelet Volume Manager" May 15 00:57:37.957032 kubelet[2211]: I0515 00:57:37.957016 2211 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 00:57:37.957662 kubelet[2211]: I0515 00:57:37.957649 2211 reconciler.go:26] "Reconciler: start to sync state" May 15 00:57:37.964011 kernel: audit: type=1327 audit(1747270657.947:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 00:57:37.947000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 00:57:37.964205 kubelet[2211]: I0515 00:57:37.962640 2211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 00:57:37.955000 audit[2211]: AVC avc: denied { mac_admin } for pid=2211 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:57:37.965708 kubelet[2211]: I0515 00:57:37.965604 2211 factory.go:221] Registration of the containerd container factory successfully May 15 00:57:37.965708 kubelet[2211]: I0515 00:57:37.965621 2211 factory.go:221] Registration of the systemd container factory successfully May 15 00:57:37.955000 audit: SELINUX_ERR 
op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 00:57:37.978211 kernel: audit: type=1400 audit(1747270657.955:226): avc: denied { mac_admin } for pid=2211 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:57:37.978258 kernel: audit: type=1401 audit(1747270657.955:226): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 00:57:37.978285 kernel: audit: type=1300 audit(1747270657.955:226): arch=c000003e syscall=188 success=no exit=-22 a0=c000d5a260 a1=c000ce2000 a2=c000d11410 a3=25 items=0 ppid=1 pid=2211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:37.978300 kernel: audit: type=1327 audit(1747270657.955:226): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 00:57:37.955000 audit[2211]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d5a260 a1=c000ce2000 a2=c000d11410 a3=25 items=0 ppid=1 pid=2211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:37.955000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 00:57:37.978410 kubelet[2211]: I0515 00:57:37.972136 2211 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 15 00:57:37.978410 kubelet[2211]: I0515 00:57:37.973005 2211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 00:57:37.978410 kubelet[2211]: I0515 00:57:37.973031 2211 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 00:57:37.978410 kubelet[2211]: I0515 00:57:37.973083 2211 kubelet.go:2337] "Starting kubelet main sync loop" May 15 00:57:37.978410 kubelet[2211]: E0515 00:57:37.973172 2211 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 00:57:38.010211 kubelet[2211]: I0515 00:57:38.010181 2211 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 00:57:38.010211 kubelet[2211]: I0515 00:57:38.010201 2211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 00:57:38.010211 kubelet[2211]: I0515 00:57:38.010223 2211 state_mem.go:36] "Initialized new in-memory state store" May 15 00:57:38.010396 kubelet[2211]: I0515 00:57:38.010379 2211 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 15 00:57:38.010420 kubelet[2211]: I0515 00:57:38.010389 2211 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 15 00:57:38.010420 kubelet[2211]: I0515 00:57:38.010412 2211 policy_none.go:49] "None policy: Start" May 15 00:57:38.011131 kubelet[2211]: I0515 00:57:38.011103 2211 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 00:57:38.011131 kubelet[2211]: I0515 00:57:38.011134 2211 state_mem.go:35] "Initializing new in-memory state store" May 15 00:57:38.011312 kubelet[2211]: I0515 00:57:38.011301 2211 state_mem.go:75] "Updated machine memory state" May 15 00:57:38.012382 kubelet[2211]: I0515 00:57:38.012319 2211 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 00:57:38.012382 kubelet[2211]: I0515 00:57:38.012378 2211 server.go:88] "Unprivileged 
containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 15 00:57:38.010000 audit[2211]: AVC avc: denied { mac_admin } for pid=2211 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:57:38.010000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 15 00:57:38.010000 audit[2211]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c3fbf0 a1=c000574ae0 a2=c000c3fbc0 a3=25 items=0 ppid=1 pid=2211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:38.010000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 15 00:57:38.012646 kubelet[2211]: I0515 00:57:38.012495 2211 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 00:57:38.013309 kubelet[2211]: I0515 00:57:38.013295 2211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 00:57:38.073629 kubelet[2211]: I0515 00:57:38.073452 2211 topology_manager.go:215] "Topology Admit Handler" podUID="6ece95f10dbffa04b25ec3439a115512" podNamespace="kube-system" podName="kube-scheduler-localhost" May 15 00:57:38.073629 kubelet[2211]: I0515 00:57:38.073556 2211 topology_manager.go:215] "Topology Admit Handler" podUID="8828716f8b459e585a00cdf8ee259890" podNamespace="kube-system" podName="kube-apiserver-localhost" May 15 00:57:38.073629 kubelet[2211]: I0515 00:57:38.073611 2211 topology_manager.go:215] "Topology 
Admit Handler" podUID="b20b39a8540dba87b5883a6f0f602dba" podNamespace="kube-system" podName="kube-controller-manager-localhost" May 15 00:57:38.118946 kubelet[2211]: I0515 00:57:38.118881 2211 kubelet_node_status.go:73] "Attempting to register node" node="localhost" May 15 00:57:38.123550 kubelet[2211]: I0515 00:57:38.123520 2211 kubelet_node_status.go:112] "Node was previously registered" node="localhost" May 15 00:57:38.123602 kubelet[2211]: I0515 00:57:38.123580 2211 kubelet_node_status.go:76] "Successfully registered node" node="localhost" May 15 00:57:38.158486 kubelet[2211]: I0515 00:57:38.158457 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:38.158559 kubelet[2211]: I0515 00:57:38.158492 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8828716f8b459e585a00cdf8ee259890-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8828716f8b459e585a00cdf8ee259890\") " pod="kube-system/kube-apiserver-localhost" May 15 00:57:38.158559 kubelet[2211]: I0515 00:57:38.158513 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:38.158559 kubelet[2211]: I0515 00:57:38.158528 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:38.158559 kubelet[2211]: I0515 00:57:38.158545 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:38.158675 kubelet[2211]: I0515 00:57:38.158582 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ece95f10dbffa04b25ec3439a115512-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6ece95f10dbffa04b25ec3439a115512\") " pod="kube-system/kube-scheduler-localhost" May 15 00:57:38.158675 kubelet[2211]: I0515 00:57:38.158604 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8828716f8b459e585a00cdf8ee259890-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8828716f8b459e585a00cdf8ee259890\") " pod="kube-system/kube-apiserver-localhost" May 15 00:57:38.158675 kubelet[2211]: I0515 00:57:38.158647 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8828716f8b459e585a00cdf8ee259890-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8828716f8b459e585a00cdf8ee259890\") " pod="kube-system/kube-apiserver-localhost" May 15 00:57:38.158748 kubelet[2211]: I0515 00:57:38.158694 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b20b39a8540dba87b5883a6f0f602dba-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b20b39a8540dba87b5883a6f0f602dba\") " pod="kube-system/kube-controller-manager-localhost" May 15 00:57:38.383912 kubelet[2211]: E0515 00:57:38.383797 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:38.383912 kubelet[2211]: E0515 00:57:38.383826 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:38.384050 kubelet[2211]: E0515 00:57:38.384020 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:38.937610 kubelet[2211]: I0515 00:57:38.937558 2211 apiserver.go:52] "Watching apiserver" May 15 00:57:38.957774 kubelet[2211]: I0515 00:57:38.957745 2211 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 00:57:38.983769 kubelet[2211]: E0515 00:57:38.983729 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:38.983906 kubelet[2211]: E0515 00:57:38.983885 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:39.000865 kubelet[2211]: E0515 00:57:39.000805 2211 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 15 00:57:39.001317 kubelet[2211]: E0515 00:57:39.001293 2211 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:39.042977 kubelet[2211]: I0515 00:57:39.042894 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.042876018 podStartE2EDuration="1.042876018s" podCreationTimestamp="2025-05-15 00:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:57:39.029348047 +0000 UTC m=+1.152187118" watchObservedRunningTime="2025-05-15 00:57:39.042876018 +0000 UTC m=+1.165715089" May 15 00:57:39.057634 kubelet[2211]: I0515 00:57:39.057569 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.057534244 podStartE2EDuration="1.057534244s" podCreationTimestamp="2025-05-15 00:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:57:39.043112326 +0000 UTC m=+1.165951397" watchObservedRunningTime="2025-05-15 00:57:39.057534244 +0000 UTC m=+1.180373315" May 15 00:57:39.064863 kubelet[2211]: I0515 00:57:39.064804 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.064782527 podStartE2EDuration="1.064782527s" podCreationTimestamp="2025-05-15 00:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:57:39.058318071 +0000 UTC m=+1.181157142" watchObservedRunningTime="2025-05-15 00:57:39.064782527 +0000 UTC m=+1.187621598" May 15 00:57:39.990841 kubelet[2211]: E0515 00:57:39.990799 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:39.991744 kubelet[2211]: E0515 00:57:39.991715 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:40.992112 kubelet[2211]: E0515 00:57:40.992070 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:42.763256 sudo[1475]: pam_unix(sudo:session): session closed for user root May 15 00:57:42.762000 audit[1475]: USER_END pid=1475 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 00:57:42.762000 audit[1475]: CRED_DISP pid=1475 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 15 00:57:42.765014 sshd[1471]: pam_unix(sshd:session): session closed for user core May 15 00:57:42.764000 audit[1471]: USER_END pid=1471 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:57:42.765000 audit[1471]: CRED_DISP pid=1471 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:57:42.767576 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:36604.service: Deactivated successfully. 
May 15 00:57:42.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.134:22-10.0.0.1:36604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:57:42.768453 systemd-logind[1293]: Session 7 logged out. Waiting for processes to exit. May 15 00:57:42.768487 systemd[1]: session-7.scope: Deactivated successfully. May 15 00:57:42.769247 systemd-logind[1293]: Removed session 7. May 15 00:57:44.320743 kubelet[2211]: E0515 00:57:44.320711 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:44.999441 kubelet[2211]: E0515 00:57:44.999393 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:46.415207 kubelet[2211]: E0515 00:57:46.415167 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:47.002080 kubelet[2211]: E0515 00:57:47.002059 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:48.003784 kubelet[2211]: E0515 00:57:48.003747 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:49.357358 kubelet[2211]: E0515 00:57:49.357326 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:53.068970 update_engine[1294]: I0515 00:57:53.068910 1294 
update_attempter.cc:509] Updating boot flags... May 15 00:57:53.634113 kubelet[2211]: I0515 00:57:53.634088 2211 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 15 00:57:53.634842 env[1307]: time="2025-05-15T00:57:53.634798414Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 00:57:53.635226 kubelet[2211]: I0515 00:57:53.634976 2211 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 15 00:57:54.568375 kubelet[2211]: I0515 00:57:54.568325 2211 topology_manager.go:215] "Topology Admit Handler" podUID="3870ea4a-090b-4955-ba4c-0c39e2038e84" podNamespace="kube-system" podName="kube-proxy-9266x" May 15 00:57:54.683839 kubelet[2211]: I0515 00:57:54.683750 2211 topology_manager.go:215] "Topology Admit Handler" podUID="04bf6c36-520e-490d-b06d-d7336c7d72a9" podNamespace="tigera-operator" podName="tigera-operator-797db67f8-2gshm" May 15 00:57:54.766674 kubelet[2211]: I0515 00:57:54.766645 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3870ea4a-090b-4955-ba4c-0c39e2038e84-lib-modules\") pod \"kube-proxy-9266x\" (UID: \"3870ea4a-090b-4955-ba4c-0c39e2038e84\") " pod="kube-system/kube-proxy-9266x" May 15 00:57:54.766837 kubelet[2211]: I0515 00:57:54.766679 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb6jp\" (UniqueName: \"kubernetes.io/projected/3870ea4a-090b-4955-ba4c-0c39e2038e84-kube-api-access-tb6jp\") pod \"kube-proxy-9266x\" (UID: \"3870ea4a-090b-4955-ba4c-0c39e2038e84\") " pod="kube-system/kube-proxy-9266x" May 15 00:57:54.766837 kubelet[2211]: I0515 00:57:54.766783 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: 
\"kubernetes.io/configmap/3870ea4a-090b-4955-ba4c-0c39e2038e84-kube-proxy\") pod \"kube-proxy-9266x\" (UID: \"3870ea4a-090b-4955-ba4c-0c39e2038e84\") " pod="kube-system/kube-proxy-9266x" May 15 00:57:54.766837 kubelet[2211]: I0515 00:57:54.766829 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3870ea4a-090b-4955-ba4c-0c39e2038e84-xtables-lock\") pod \"kube-proxy-9266x\" (UID: \"3870ea4a-090b-4955-ba4c-0c39e2038e84\") " pod="kube-system/kube-proxy-9266x" May 15 00:57:54.867694 kubelet[2211]: I0515 00:57:54.867572 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6dnx\" (UniqueName: \"kubernetes.io/projected/04bf6c36-520e-490d-b06d-d7336c7d72a9-kube-api-access-g6dnx\") pod \"tigera-operator-797db67f8-2gshm\" (UID: \"04bf6c36-520e-490d-b06d-d7336c7d72a9\") " pod="tigera-operator/tigera-operator-797db67f8-2gshm" May 15 00:57:54.867694 kubelet[2211]: I0515 00:57:54.867627 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/04bf6c36-520e-490d-b06d-d7336c7d72a9-var-lib-calico\") pod \"tigera-operator-797db67f8-2gshm\" (UID: \"04bf6c36-520e-490d-b06d-d7336c7d72a9\") " pod="tigera-operator/tigera-operator-797db67f8-2gshm" May 15 00:57:54.986363 env[1307]: time="2025-05-15T00:57:54.986313852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-2gshm,Uid:04bf6c36-520e-490d-b06d-d7336c7d72a9,Namespace:tigera-operator,Attempt:0,}" May 15 00:57:55.051376 env[1307]: time="2025-05-15T00:57:55.051291594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:57:55.051376 env[1307]: time="2025-05-15T00:57:55.051333905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:57:55.051376 env[1307]: time="2025-05-15T00:57:55.051346876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:57:55.051596 env[1307]: time="2025-05-15T00:57:55.051497364Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9a98944ae1c19d104a0e8e85fd597da3099ff6894ebea5326f377fe6fd3bf8a3 pid=2323 runtime=io.containerd.runc.v2 May 15 00:57:55.098867 env[1307]: time="2025-05-15T00:57:55.098812262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-797db67f8-2gshm,Uid:04bf6c36-520e-490d-b06d-d7336c7d72a9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9a98944ae1c19d104a0e8e85fd597da3099ff6894ebea5326f377fe6fd3bf8a3\"" May 15 00:57:55.101439 env[1307]: time="2025-05-15T00:57:55.101404326Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\"" May 15 00:57:55.170835 kubelet[2211]: E0515 00:57:55.170801 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:55.171244 env[1307]: time="2025-05-15T00:57:55.171205153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9266x,Uid:3870ea4a-090b-4955-ba4c-0c39e2038e84,Namespace:kube-system,Attempt:0,}" May 15 00:57:55.217096 env[1307]: time="2025-05-15T00:57:55.217020246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:57:55.217096 env[1307]: time="2025-05-15T00:57:55.217064601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:57:55.217096 env[1307]: time="2025-05-15T00:57:55.217077302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:57:55.217361 env[1307]: time="2025-05-15T00:57:55.217264487Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c82454fca4fc844b1ef08f8980dbbc71f78763bf648a39fe040a368b9a35962 pid=2364 runtime=io.containerd.runc.v2 May 15 00:57:55.247863 env[1307]: time="2025-05-15T00:57:55.247803294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9266x,Uid:3870ea4a-090b-4955-ba4c-0c39e2038e84,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c82454fca4fc844b1ef08f8980dbbc71f78763bf648a39fe040a368b9a35962\"" May 15 00:57:55.248498 kubelet[2211]: E0515 00:57:55.248466 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:55.250368 env[1307]: time="2025-05-15T00:57:55.250331916Z" level=info msg="CreateContainer within sandbox \"7c82454fca4fc844b1ef08f8980dbbc71f78763bf648a39fe040a368b9a35962\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 00:57:55.268013 env[1307]: time="2025-05-15T00:57:55.267927116Z" level=info msg="CreateContainer within sandbox \"7c82454fca4fc844b1ef08f8980dbbc71f78763bf648a39fe040a368b9a35962\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4ecff1a3a91e7aabae331056b49b775c0958925e4fdb624fd3f78ee2d7b87ce9\"" May 15 00:57:55.268572 env[1307]: time="2025-05-15T00:57:55.268535602Z" level=info msg="StartContainer for 
\"4ecff1a3a91e7aabae331056b49b775c0958925e4fdb624fd3f78ee2d7b87ce9\"" May 15 00:57:55.314443 env[1307]: time="2025-05-15T00:57:55.314393386Z" level=info msg="StartContainer for \"4ecff1a3a91e7aabae331056b49b775c0958925e4fdb624fd3f78ee2d7b87ce9\" returns successfully" May 15 00:57:55.382002 kernel: kauditd_printk_skb: 9 callbacks suppressed May 15 00:57:55.382150 kernel: audit: type=1325 audit(1747270675.376:233): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.382176 kernel: audit: type=1300 audit(1747270675.376:233): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb3f12a00 a2=0 a3=7ffdb3f129ec items=0 ppid=2416 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.376000 audit[2459]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2459 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.376000 audit[2459]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb3f12a00 a2=0 a3=7ffdb3f129ec items=0 ppid=2416 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 00:57:55.389787 kernel: audit: type=1327 audit(1747270675.376:233): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 00:57:55.389831 kernel: audit: type=1325 audit(1747270675.376:234): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 
00:57:55.376000 audit[2460]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.392177 kernel: audit: type=1300 audit(1747270675.376:234): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfb5165a0 a2=0 a3=7ffdfb51658c items=0 ppid=2416 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.376000 audit[2460]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfb5165a0 a2=0 a3=7ffdfb51658c items=0 ppid=2416 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.376000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 00:57:55.399788 kernel: audit: type=1327 audit(1747270675.376:234): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 15 00:57:55.399837 kernel: audit: type=1325 audit(1747270675.380:235): table=nat:40 family=2 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.380000 audit[2461]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.380000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1c0a1500 a2=0 a3=7ffe1c0a14ec items=0 ppid=2416 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.406470 kernel: audit: 
type=1300 audit(1747270675.380:235): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe1c0a1500 a2=0 a3=7ffe1c0a14ec items=0 ppid=2416 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.406524 kernel: audit: type=1327 audit(1747270675.380:235): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 15 00:57:55.380000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 15 00:57:55.408589 kernel: audit: type=1325 audit(1747270675.382:236): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.382000 audit[2462]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.382000 audit[2462]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc5fef05c0 a2=0 a3=7ffc5fef05ac items=0 ppid=2416 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 15 00:57:55.383000 audit[2463]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.383000 audit[2463]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff63ad92a0 a2=0 a3=7fff63ad928c items=0 ppid=2416 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 15 00:57:55.384000 audit[2464]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.384000 audit[2464]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffc0de6400 a2=0 a3=7fffc0de63ec items=0 ppid=2416 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 15 00:57:55.479000 audit[2465]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.479000 audit[2465]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe6696c960 a2=0 a3=7ffe6696c94c items=0 ppid=2416 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.479000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 15 00:57:55.481000 audit[2467]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2467 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.481000 audit[2467]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffef62d1010 a2=0 a3=7ffef62d0ffc items=0 ppid=2416 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.481000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 15 00:57:55.484000 audit[2470]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.484000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff5b75ced0 a2=0 a3=7fff5b75cebc items=0 ppid=2416 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.484000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 15 00:57:55.485000 audit[2471]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.485000 audit[2471]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef2942970 a2=0 a3=7ffef294295c items=0 ppid=2416 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.485000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 15 00:57:55.487000 audit[2473]: NETFILTER_CFG 
table=filter:48 family=2 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.487000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd96333bd0 a2=0 a3=7ffd96333bbc items=0 ppid=2416 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 15 00:57:55.488000 audit[2474]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.488000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc70e96120 a2=0 a3=7ffc70e9610c items=0 ppid=2416 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.488000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 15 00:57:55.491000 audit[2476]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.491000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fffa857db30 a2=0 a3=7fffa857db1c items=0 ppid=2416 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.491000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 15 00:57:55.494000 audit[2479]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.494000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffefc1ca570 a2=0 a3=7ffefc1ca55c items=0 ppid=2416 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.494000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 15 00:57:55.495000 audit[2480]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.495000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7644fbc0 a2=0 a3=7ffc7644fbac items=0 ppid=2416 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.495000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 15 00:57:55.497000 audit[2482]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.497000 audit[2482]: SYSCALL arch=c000003e syscall=46 
success=yes exit=528 a0=3 a1=7ffd76cdbd20 a2=0 a3=7ffd76cdbd0c items=0 ppid=2416 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.497000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 15 00:57:55.498000 audit[2483]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.498000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff172d4890 a2=0 a3=7fff172d487c items=0 ppid=2416 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 15 00:57:55.500000 audit[2485]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.500000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcffc4cac0 a2=0 a3=7ffcffc4caac items=0 ppid=2416 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.500000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 15 00:57:55.503000 audit[2488]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.503000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff92c18dd0 a2=0 a3=7fff92c18dbc items=0 ppid=2416 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.503000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 15 00:57:55.506000 audit[2491]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.506000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe3207d0e0 a2=0 a3=7ffe3207d0cc items=0 ppid=2416 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 15 00:57:55.507000 audit[2492]: NETFILTER_CFG table=nat:58 family=2 entries=1 
op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.507000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff419e53d0 a2=0 a3=7fff419e53bc items=0 ppid=2416 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 15 00:57:55.508000 audit[2494]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.508000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd9a2249f0 a2=0 a3=7ffd9a2249dc items=0 ppid=2416 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.508000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 00:57:55.512000 audit[2497]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.512000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffff8ce6500 a2=0 a3=7ffff8ce64ec items=0 ppid=2416 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.512000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 00:57:55.513000 audit[2498]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.513000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff55d53d00 a2=0 a3=7fff55d53cec items=0 ppid=2416 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 15 00:57:55.515000 audit[2500]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 15 00:57:55.515000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff8c502a20 a2=0 a3=7fff8c502a0c items=0 ppid=2416 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 15 00:57:55.532000 audit[2506]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:57:55.532000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd42190850 a2=0 a3=7ffd4219083c 
items=0 ppid=2416 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.532000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:57:55.541000 audit[2506]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:57:55.541000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd42190850 a2=0 a3=7ffd4219083c items=0 ppid=2416 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:57:55.542000 audit[2510]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.542000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff76237690 a2=0 a3=7fff7623767c items=0 ppid=2416 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.542000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 15 00:57:55.544000 audit[2512]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.544000 audit[2512]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=836 a0=3 a1=7fff6662a230 a2=0 a3=7fff6662a21c items=0 ppid=2416 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.544000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 15 00:57:55.547000 audit[2515]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.547000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff2f626090 a2=0 a3=7fff2f62607c items=0 ppid=2416 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.547000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 15 00:57:55.548000 audit[2516]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.548000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb4b02d40 a2=0 a3=7ffcb4b02d2c items=0 ppid=2416 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.548000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 15 00:57:55.550000 audit[2518]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.550000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffef1a350e0 a2=0 a3=7ffef1a350cc items=0 ppid=2416 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 15 00:57:55.550000 audit[2519]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.550000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9caa7cd0 a2=0 a3=7ffe9caa7cbc items=0 ppid=2416 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 15 00:57:55.552000 audit[2521]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.552000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffcfedfd390 a2=0 a3=7ffcfedfd37c items=0 ppid=2416 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.552000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 15 00:57:55.555000 audit[2524]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.555000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffe7e789b30 a2=0 a3=7ffe7e789b1c items=0 ppid=2416 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 15 00:57:55.556000 audit[2525]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.556000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec43342c0 a2=0 a3=7ffec43342ac items=0 ppid=2416 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 15 00:57:55.558000 audit[2527]: NETFILTER_CFG 
table=filter:74 family=10 entries=1 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.558000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc80645ed0 a2=0 a3=7ffc80645ebc items=0 ppid=2416 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.558000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 15 00:57:55.559000 audit[2528]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.559000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe99364be0 a2=0 a3=7ffe99364bcc items=0 ppid=2416 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 15 00:57:55.562000 audit[2530]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.562000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff9fd2e410 a2=0 a3=7fff9fd2e3fc items=0 ppid=2416 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.562000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 15 00:57:55.565000 audit[2533]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.565000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0e985ec0 a2=0 a3=7ffe0e985eac items=0 ppid=2416 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.565000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 15 00:57:55.568000 audit[2536]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.568000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffefd78b9e0 a2=0 a3=7ffefd78b9cc items=0 ppid=2416 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.568000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 15 00:57:55.569000 audit[2537]: NETFILTER_CFG table=nat:79 family=10 
entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.569000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcfa0f7a20 a2=0 a3=7ffcfa0f7a0c items=0 ppid=2416 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 15 00:57:55.570000 audit[2539]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.570000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd9ce69120 a2=0 a3=7ffd9ce6910c items=0 ppid=2416 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 00:57:55.573000 audit[2542]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.573000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffddcb8e8e0 a2=0 a3=7ffddcb8e8cc items=0 ppid=2416 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.573000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 15 00:57:55.574000 audit[2543]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.574000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf7342380 a2=0 a3=7ffdf734236c items=0 ppid=2416 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 15 00:57:55.576000 audit[2545]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.576000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdc1b8b290 a2=0 a3=7ffdc1b8b27c items=0 ppid=2416 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.576000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 15 00:57:55.577000 audit[2546]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.577000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0f023fc0 a2=0 
a3=7fff0f023fac items=0 ppid=2416 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 15 00:57:55.578000 audit[2548]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.578000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc57d8c920 a2=0 a3=7ffc57d8c90c items=0 ppid=2416 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 00:57:55.581000 audit[2551]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 15 00:57:55.581000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffca3f5eef0 a2=0 a3=7ffca3f5eedc items=0 ppid=2416 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.581000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 15 00:57:55.583000 audit[2553]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 15 00:57:55.583000 audit[2553]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffc517cd620 a2=0 a3=7ffc517cd60c items=0 ppid=2416 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.583000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:57:55.584000 audit[2553]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 15 00:57:55.584000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffc517cd620 a2=0 a3=7ffc517cd60c items=0 ppid=2416 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:57:55.584000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:57:56.014703 kubelet[2211]: E0515 00:57:56.014675 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:57:56.021939 kubelet[2211]: I0515 00:57:56.021883 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9266x" podStartSLOduration=2.021861968 podStartE2EDuration="2.021861968s" podCreationTimestamp="2025-05-15 00:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:57:56.021770041 +0000 UTC m=+18.144609132" watchObservedRunningTime="2025-05-15 00:57:56.021861968 +0000 UTC m=+18.144701039" May 15 00:57:56.635698 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4044703209.mount: Deactivated successfully. May 15 00:57:57.660687 env[1307]: time="2025-05-15T00:57:57.660575510Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:57.697850 env[1307]: time="2025-05-15T00:57:57.697716574Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:57.700071 env[1307]: time="2025-05-15T00:57:57.700017618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:57.702436 env[1307]: time="2025-05-15T00:57:57.702360822Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:a4a44422d8f2a14e0aaea2031ccb5580f2bf68218c9db444450c1888743305e9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:57:57.703180 env[1307]: time="2025-05-15T00:57:57.703140870Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.7\" returns image reference \"sha256:e9b19fa62f476f04e5840eb65a0f71b49c7b9f4ceede31675409ddc218bb5578\"" May 15 00:57:57.705434 env[1307]: time="2025-05-15T00:57:57.705409691Z" level=info msg="CreateContainer within sandbox \"9a98944ae1c19d104a0e8e85fd597da3099ff6894ebea5326f377fe6fd3bf8a3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 15 00:57:57.892342 env[1307]: time="2025-05-15T00:57:57.892284796Z" level=info msg="CreateContainer within sandbox \"9a98944ae1c19d104a0e8e85fd597da3099ff6894ebea5326f377fe6fd3bf8a3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6156f5608163caa12bbe0c271518e4da1ff5ba385be5da70de21fe0b5220ddfd\"" May 15 00:57:57.892778 
env[1307]: time="2025-05-15T00:57:57.892730004Z" level=info msg="StartContainer for \"6156f5608163caa12bbe0c271518e4da1ff5ba385be5da70de21fe0b5220ddfd\"" May 15 00:57:57.933549 env[1307]: time="2025-05-15T00:57:57.933410531Z" level=info msg="StartContainer for \"6156f5608163caa12bbe0c271518e4da1ff5ba385be5da70de21fe0b5220ddfd\" returns successfully" May 15 00:58:00.962000 audit[2594]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:00.964126 kernel: kauditd_printk_skb: 143 callbacks suppressed May 15 00:58:00.964180 kernel: audit: type=1325 audit(1747270680.962:284): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:00.962000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe41132430 a2=0 a3=7ffe4113241c items=0 ppid=2416 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:00.971131 kernel: audit: type=1300 audit(1747270680.962:284): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe41132430 a2=0 a3=7ffe4113241c items=0 ppid=2416 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:00.971176 kernel: audit: type=1327 audit(1747270680.962:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:00.962000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:00.974000 audit[2594]: NETFILTER_CFG table=nat:90 family=2 entries=12 
op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:00.974000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe41132430 a2=0 a3=0 items=0 ppid=2416 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:00.982759 kernel: audit: type=1325 audit(1747270680.974:285): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:00.982803 kernel: audit: type=1300 audit(1747270680.974:285): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe41132430 a2=0 a3=0 items=0 ppid=2416 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:00.982827 kernel: audit: type=1327 audit(1747270680.974:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:00.974000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:00.990000 audit[2596]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:00.990000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff2981d8b0 a2=0 a3=7fff2981d89c items=0 ppid=2416 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:00.999220 kernel: audit: type=1325 audit(1747270680.990:286): table=filter:91 family=2 
entries=16 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:00.999276 kernel: audit: type=1300 audit(1747270680.990:286): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff2981d8b0 a2=0 a3=7fff2981d89c items=0 ppid=2416 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:00.999305 kernel: audit: type=1327 audit(1747270680.990:286): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:00.990000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:01.002000 audit[2596]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:01.002000 audit[2596]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff2981d8b0 a2=0 a3=0 items=0 ppid=2416 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:01.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:01.006974 kernel: audit: type=1325 audit(1747270681.002:287): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:01.110737 kubelet[2211]: I0515 00:58:01.110683 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-797db67f8-2gshm" podStartSLOduration=4.507385863 podStartE2EDuration="7.11066276s" podCreationTimestamp="2025-05-15 
00:57:54 +0000 UTC" firstStartedPulling="2025-05-15 00:57:55.100737199 +0000 UTC m=+17.223576270" lastFinishedPulling="2025-05-15 00:57:57.704014106 +0000 UTC m=+19.826853167" observedRunningTime="2025-05-15 00:57:58.030996499 +0000 UTC m=+20.153835601" watchObservedRunningTime="2025-05-15 00:58:01.11066276 +0000 UTC m=+23.233501831" May 15 00:58:01.111198 kubelet[2211]: I0515 00:58:01.110810 2211 topology_manager.go:215] "Topology Admit Handler" podUID="fb031ce5-df3d-4fc3-a255-e9b8990f1c9a" podNamespace="calico-system" podName="calico-typha-599bfd58cc-mq9zf" May 15 00:58:01.111789 kubelet[2211]: I0515 00:58:01.111472 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb031ce5-df3d-4fc3-a255-e9b8990f1c9a-tigera-ca-bundle\") pod \"calico-typha-599bfd58cc-mq9zf\" (UID: \"fb031ce5-df3d-4fc3-a255-e9b8990f1c9a\") " pod="calico-system/calico-typha-599bfd58cc-mq9zf" May 15 00:58:01.111789 kubelet[2211]: I0515 00:58:01.111513 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwmgv\" (UniqueName: \"kubernetes.io/projected/fb031ce5-df3d-4fc3-a255-e9b8990f1c9a-kube-api-access-cwmgv\") pod \"calico-typha-599bfd58cc-mq9zf\" (UID: \"fb031ce5-df3d-4fc3-a255-e9b8990f1c9a\") " pod="calico-system/calico-typha-599bfd58cc-mq9zf" May 15 00:58:01.111789 kubelet[2211]: I0515 00:58:01.111529 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fb031ce5-df3d-4fc3-a255-e9b8990f1c9a-typha-certs\") pod \"calico-typha-599bfd58cc-mq9zf\" (UID: \"fb031ce5-df3d-4fc3-a255-e9b8990f1c9a\") " pod="calico-system/calico-typha-599bfd58cc-mq9zf" May 15 00:58:01.160917 kubelet[2211]: I0515 00:58:01.160878 2211 topology_manager.go:215] "Topology Admit Handler" podUID="1af1a5f2-4933-4456-b057-97057326582c" podNamespace="calico-system" 
podName="calico-node-dpnsl" May 15 00:58:01.303308 kubelet[2211]: I0515 00:58:01.303185 2211 topology_manager.go:215] "Topology Admit Handler" podUID="234fff70-d82a-4012-9e49-d23446deada6" podNamespace="calico-system" podName="csi-node-driver-pk5fw" May 15 00:58:01.304038 kubelet[2211]: E0515 00:58:01.303435 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:01.313024 kubelet[2211]: I0515 00:58:01.312975 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-var-run-calico\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313024 kubelet[2211]: I0515 00:58:01.313012 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-var-lib-calico\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313024 kubelet[2211]: I0515 00:58:01.313027 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/234fff70-d82a-4012-9e49-d23446deada6-socket-dir\") pod \"csi-node-driver-pk5fw\" (UID: \"234fff70-d82a-4012-9e49-d23446deada6\") " pod="calico-system/csi-node-driver-pk5fw" May 15 00:58:01.313024 kubelet[2211]: I0515 00:58:01.313040 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/234fff70-d82a-4012-9e49-d23446deada6-registration-dir\") pod \"csi-node-driver-pk5fw\" (UID: \"234fff70-d82a-4012-9e49-d23446deada6\") " pod="calico-system/csi-node-driver-pk5fw" May 15 00:58:01.313298 kubelet[2211]: I0515 00:58:01.313058 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-flexvol-driver-host\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313298 kubelet[2211]: I0515 00:58:01.313072 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzxhc\" (UniqueName: \"kubernetes.io/projected/1af1a5f2-4933-4456-b057-97057326582c-kube-api-access-vzxhc\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313298 kubelet[2211]: I0515 00:58:01.313085 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-bin-dir\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313298 kubelet[2211]: I0515 00:58:01.313099 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/234fff70-d82a-4012-9e49-d23446deada6-varrun\") pod \"csi-node-driver-pk5fw\" (UID: \"234fff70-d82a-4012-9e49-d23446deada6\") " pod="calico-system/csi-node-driver-pk5fw" May 15 00:58:01.313298 kubelet[2211]: I0515 00:58:01.313120 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs87q\" (UniqueName: 
\"kubernetes.io/projected/234fff70-d82a-4012-9e49-d23446deada6-kube-api-access-fs87q\") pod \"csi-node-driver-pk5fw\" (UID: \"234fff70-d82a-4012-9e49-d23446deada6\") " pod="calico-system/csi-node-driver-pk5fw" May 15 00:58:01.313422 kubelet[2211]: I0515 00:58:01.313133 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1af1a5f2-4933-4456-b057-97057326582c-node-certs\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313422 kubelet[2211]: I0515 00:58:01.313154 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-lib-modules\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313422 kubelet[2211]: I0515 00:58:01.313172 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/234fff70-d82a-4012-9e49-d23446deada6-kubelet-dir\") pod \"csi-node-driver-pk5fw\" (UID: \"234fff70-d82a-4012-9e49-d23446deada6\") " pod="calico-system/csi-node-driver-pk5fw" May 15 00:58:01.313422 kubelet[2211]: I0515 00:58:01.313186 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-xtables-lock\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313422 kubelet[2211]: I0515 00:58:01.313201 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-policysync\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313535 kubelet[2211]: I0515 00:58:01.313215 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1af1a5f2-4933-4456-b057-97057326582c-tigera-ca-bundle\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313535 kubelet[2211]: I0515 00:58:01.313230 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-net-dir\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.313535 kubelet[2211]: I0515 00:58:01.313246 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-log-dir\") pod \"calico-node-dpnsl\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " pod="calico-system/calico-node-dpnsl" May 15 00:58:01.413760 kubelet[2211]: E0515 00:58:01.413715 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:01.414373 env[1307]: time="2025-05-15T00:58:01.414335696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599bfd58cc-mq9zf,Uid:fb031ce5-df3d-4fc3-a255-e9b8990f1c9a,Namespace:calico-system,Attempt:0,}" May 15 00:58:01.416918 kubelet[2211]: E0515 00:58:01.416886 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
May 15 00:58:01.416918 kubelet[2211]: W0515 00:58:01.416915 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:01.417019 kubelet[2211]: E0515 00:58:01.416930 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:01.514334 kubelet[2211]: E0515 00:58:01.514311 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:01.514334 kubelet[2211]: W0515 00:58:01.514331 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:01.514444 kubelet[2211]: E0515 00:58:01.514351 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:01.514515 kubelet[2211]: E0515 00:58:01.514501 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:01.514515 kubelet[2211]: W0515 00:58:01.514511 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:01.514515 kubelet[2211]: E0515 00:58:01.514518 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:01.539377 kubelet[2211]: E0515 00:58:01.539350 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:01.539377 kubelet[2211]: W0515 00:58:01.539368 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:01.539476 kubelet[2211]: E0515 00:58:01.539389 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:01.539610 kubelet[2211]: E0515 00:58:01.539580 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:01.539610 kubelet[2211]: W0515 00:58:01.539601 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:01.539784 kubelet[2211]: E0515 00:58:01.539624 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:01.621207 env[1307]: time="2025-05-15T00:58:01.621049411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:58:01.621207 env[1307]: time="2025-05-15T00:58:01.621094643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:58:01.621207 env[1307]: time="2025-05-15T00:58:01.621122264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:58:01.621545 env[1307]: time="2025-05-15T00:58:01.621445794Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b9745140c0413cf8de0a0ca9e463d105cdb2a9a9282a186ce743d8f1fd4c3e9d pid=2614 runtime=io.containerd.runc.v2 May 15 00:58:01.666225 env[1307]: time="2025-05-15T00:58:01.666155580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-599bfd58cc-mq9zf,Uid:fb031ce5-df3d-4fc3-a255-e9b8990f1c9a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b9745140c0413cf8de0a0ca9e463d105cdb2a9a9282a186ce743d8f1fd4c3e9d\"" May 15 00:58:01.667782 kubelet[2211]: E0515 00:58:01.667758 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:01.670406 env[1307]: time="2025-05-15T00:58:01.670368039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 15 00:58:01.763619 kubelet[2211]: E0515 00:58:01.763581 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:01.764069 env[1307]: time="2025-05-15T00:58:01.764025885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dpnsl,Uid:1af1a5f2-4933-4456-b057-97057326582c,Namespace:calico-system,Attempt:0,}" May 15 00:58:01.779462 env[1307]: time="2025-05-15T00:58:01.779400784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:58:01.779462 env[1307]: time="2025-05-15T00:58:01.779443240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:58:01.779462 env[1307]: time="2025-05-15T00:58:01.779453443Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:58:01.779706 env[1307]: time="2025-05-15T00:58:01.779657943Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66 pid=2653 runtime=io.containerd.runc.v2 May 15 00:58:01.807009 env[1307]: time="2025-05-15T00:58:01.806940413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dpnsl,Uid:1af1a5f2-4933-4456-b057-97057326582c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\"" May 15 00:58:01.807578 kubelet[2211]: E0515 00:58:01.807557 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:02.014000 audit[2687]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:02.014000 audit[2687]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fffea9d1470 a2=0 a3=7fffea9d145c items=0 ppid=2416 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:02.014000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:02.020000 audit[2687]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:02.020000 audit[2687]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffea9d1470 a2=0 a3=0 items=0 ppid=2416 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:02.020000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:02.974069 kubelet[2211]: E0515 00:58:02.974003 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:04.256623 env[1307]: time="2025-05-15T00:58:04.256553613Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:04.259339 env[1307]: time="2025-05-15T00:58:04.259286329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:04.260860 env[1307]: time="2025-05-15T00:58:04.260812671Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:04.262288 env[1307]: time="2025-05-15T00:58:04.262258135Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:04.262846 env[1307]: time="2025-05-15T00:58:04.262804982Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:bde24a3cb8851b59372b76b3ad78f8028d1a915ffed82c6cc6256f34e500bd3d\"" May 15 00:58:04.264015 env[1307]: time="2025-05-15T00:58:04.263992977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" May 15 00:58:04.273411 env[1307]: time="2025-05-15T00:58:04.273366383Z" level=info msg="CreateContainer within sandbox \"b9745140c0413cf8de0a0ca9e463d105cdb2a9a9282a186ce743d8f1fd4c3e9d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 15 00:58:04.301017 env[1307]: time="2025-05-15T00:58:04.300947122Z" level=info msg="CreateContainer within sandbox \"b9745140c0413cf8de0a0ca9e463d105cdb2a9a9282a186ce743d8f1fd4c3e9d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e3e3ef392e83bdbbd2cc6cab6f32bd90eb8cc7dd57930d8f81b1217eefd8568e\"" May 15 00:58:04.301527 env[1307]: time="2025-05-15T00:58:04.301494952Z" level=info msg="StartContainer for \"e3e3ef392e83bdbbd2cc6cab6f32bd90eb8cc7dd57930d8f81b1217eefd8568e\"" May 15 00:58:04.410161 env[1307]: time="2025-05-15T00:58:04.410079997Z" level=info msg="StartContainer for \"e3e3ef392e83bdbbd2cc6cab6f32bd90eb8cc7dd57930d8f81b1217eefd8568e\" returns successfully" May 15 00:58:04.838908 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:57044.service. May 15 00:58:04.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.134:22-10.0.0.1:57044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:04.872000 audit[2750]: USER_ACCT pid=2750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:04.873489 sshd[2750]: Accepted publickey for core from 10.0.0.1 port 57044 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:04.873000 audit[2750]: CRED_ACQ pid=2750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:04.873000 audit[2750]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4812e860 a2=3 a3=0 items=0 ppid=1 pid=2750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:04.873000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:04.874449 sshd[2750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:04.877704 systemd-logind[1293]: New session 8 of user core. May 15 00:58:04.878394 systemd[1]: Started session-8.scope. 
May 15 00:58:04.881000 audit[2750]: USER_START pid=2750 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:04.882000 audit[2753]: CRED_ACQ pid=2753 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:04.974447 kubelet[2211]: E0515 00:58:04.974389 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:04.983915 sshd[2750]: pam_unix(sshd:session): session closed for user core May 15 00:58:04.983000 audit[2750]: USER_END pid=2750 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:04.984000 audit[2750]: CRED_DISP pid=2750 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:04.986934 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:57044.service: Deactivated successfully. May 15 00:58:04.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.134:22-10.0.0.1:57044 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 15 00:58:04.987808 systemd[1]: session-8.scope: Deactivated successfully. May 15 00:58:04.988185 systemd-logind[1293]: Session 8 logged out. Waiting for processes to exit. May 15 00:58:04.988813 systemd-logind[1293]: Removed session 8. May 15 00:58:05.033383 kubelet[2211]: E0515 00:58:05.033336 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:05.037977 kubelet[2211]: E0515 00:58:05.037921 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.037977 kubelet[2211]: W0515 00:58:05.037940 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.037977 kubelet[2211]: E0515 00:58:05.037964 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.038221 kubelet[2211]: E0515 00:58:05.038097 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.038221 kubelet[2211]: W0515 00:58:05.038104 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.038221 kubelet[2211]: E0515 00:58:05.038111 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.038311 kubelet[2211]: E0515 00:58:05.038238 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.038311 kubelet[2211]: W0515 00:58:05.038243 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.038311 kubelet[2211]: E0515 00:58:05.038249 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.038411 kubelet[2211]: E0515 00:58:05.038368 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.038411 kubelet[2211]: W0515 00:58:05.038373 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.038411 kubelet[2211]: E0515 00:58:05.038380 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.038540 kubelet[2211]: E0515 00:58:05.038522 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.038540 kubelet[2211]: W0515 00:58:05.038531 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.038540 kubelet[2211]: E0515 00:58:05.038538 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.038671 kubelet[2211]: E0515 00:58:05.038656 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.038671 kubelet[2211]: W0515 00:58:05.038663 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.038671 kubelet[2211]: E0515 00:58:05.038671 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.038802 kubelet[2211]: E0515 00:58:05.038787 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.038802 kubelet[2211]: W0515 00:58:05.038795 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.038802 kubelet[2211]: E0515 00:58:05.038800 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.038928 kubelet[2211]: E0515 00:58:05.038918 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.038928 kubelet[2211]: W0515 00:58:05.038926 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.039022 kubelet[2211]: E0515 00:58:05.038932 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.039078 kubelet[2211]: E0515 00:58:05.039068 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.039078 kubelet[2211]: W0515 00:58:05.039075 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.039144 kubelet[2211]: E0515 00:58:05.039082 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.039224 kubelet[2211]: E0515 00:58:05.039202 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.039224 kubelet[2211]: W0515 00:58:05.039207 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.039224 kubelet[2211]: E0515 00:58:05.039215 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.039349 kubelet[2211]: E0515 00:58:05.039338 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.039349 kubelet[2211]: W0515 00:58:05.039346 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.039426 kubelet[2211]: E0515 00:58:05.039352 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.039562 kubelet[2211]: E0515 00:58:05.039538 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.039562 kubelet[2211]: W0515 00:58:05.039552 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.039562 kubelet[2211]: E0515 00:58:05.039566 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.039744 kubelet[2211]: E0515 00:58:05.039707 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.039744 kubelet[2211]: W0515 00:58:05.039715 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.039744 kubelet[2211]: E0515 00:58:05.039724 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.039855 kubelet[2211]: E0515 00:58:05.039839 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.039855 kubelet[2211]: W0515 00:58:05.039850 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.039922 kubelet[2211]: E0515 00:58:05.039859 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.040002 kubelet[2211]: E0515 00:58:05.039987 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.040002 kubelet[2211]: W0515 00:58:05.040000 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.040080 kubelet[2211]: E0515 00:58:05.040011 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.043342 kubelet[2211]: I0515 00:58:05.043301 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-599bfd58cc-mq9zf" podStartSLOduration=1.4495366619999999 podStartE2EDuration="4.043289909s" podCreationTimestamp="2025-05-15 00:58:01 +0000 UTC" firstStartedPulling="2025-05-15 00:58:01.67004952 +0000 UTC m=+23.792888591" lastFinishedPulling="2025-05-15 00:58:04.263802747 +0000 UTC m=+26.386641838" observedRunningTime="2025-05-15 00:58:05.042387869 +0000 UTC m=+27.165226930" watchObservedRunningTime="2025-05-15 00:58:05.043289909 +0000 UTC m=+27.166128980" May 15 00:58:05.137578 kubelet[2211]: E0515 00:58:05.137471 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.137578 kubelet[2211]: W0515 00:58:05.137498 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.137578 kubelet[2211]: E0515 00:58:05.137517 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.137768 kubelet[2211]: E0515 00:58:05.137752 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.137768 kubelet[2211]: W0515 00:58:05.137763 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.137849 kubelet[2211]: E0515 00:58:05.137774 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.138136 kubelet[2211]: E0515 00:58:05.138087 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.138136 kubelet[2211]: W0515 00:58:05.138114 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.138136 kubelet[2211]: E0515 00:58:05.138146 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.138405 kubelet[2211]: E0515 00:58:05.138390 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.138405 kubelet[2211]: W0515 00:58:05.138400 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.138513 kubelet[2211]: E0515 00:58:05.138412 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.138601 kubelet[2211]: E0515 00:58:05.138585 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.138601 kubelet[2211]: W0515 00:58:05.138595 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.138676 kubelet[2211]: E0515 00:58:05.138618 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.138838 kubelet[2211]: E0515 00:58:05.138803 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.138838 kubelet[2211]: W0515 00:58:05.138826 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.138927 kubelet[2211]: E0515 00:58:05.138862 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.139056 kubelet[2211]: E0515 00:58:05.139033 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.139056 kubelet[2211]: W0515 00:58:05.139045 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.139224 kubelet[2211]: E0515 00:58:05.139150 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.139297 kubelet[2211]: E0515 00:58:05.139279 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.139297 kubelet[2211]: W0515 00:58:05.139293 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.139396 kubelet[2211]: E0515 00:58:05.139355 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.139533 kubelet[2211]: E0515 00:58:05.139517 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.139533 kubelet[2211]: W0515 00:58:05.139528 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.139533 kubelet[2211]: E0515 00:58:05.139540 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.139834 kubelet[2211]: E0515 00:58:05.139803 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.139834 kubelet[2211]: W0515 00:58:05.139814 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.139834 kubelet[2211]: E0515 00:58:05.139827 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.139999 kubelet[2211]: E0515 00:58:05.139986 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.139999 kubelet[2211]: W0515 00:58:05.139995 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.140068 kubelet[2211]: E0515 00:58:05.140007 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.140210 kubelet[2211]: E0515 00:58:05.140188 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.140210 kubelet[2211]: W0515 00:58:05.140202 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.140270 kubelet[2211]: E0515 00:58:05.140217 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.140373 kubelet[2211]: E0515 00:58:05.140365 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.140397 kubelet[2211]: W0515 00:58:05.140373 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.140397 kubelet[2211]: E0515 00:58:05.140384 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.140575 kubelet[2211]: E0515 00:58:05.140559 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.140575 kubelet[2211]: W0515 00:58:05.140572 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.140653 kubelet[2211]: E0515 00:58:05.140586 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.140825 kubelet[2211]: E0515 00:58:05.140808 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.140825 kubelet[2211]: W0515 00:58:05.140820 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.140910 kubelet[2211]: E0515 00:58:05.140833 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.141004 kubelet[2211]: E0515 00:58:05.140989 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.141004 kubelet[2211]: W0515 00:58:05.141000 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.141079 kubelet[2211]: E0515 00:58:05.141012 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:05.141173 kubelet[2211]: E0515 00:58:05.141159 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.141173 kubelet[2211]: W0515 00:58:05.141170 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.141221 kubelet[2211]: E0515 00:58:05.141181 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:05.141344 kubelet[2211]: E0515 00:58:05.141333 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:05.141369 kubelet[2211]: W0515 00:58:05.141344 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:05.141369 kubelet[2211]: E0515 00:58:05.141351 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.034101 kubelet[2211]: I0515 00:58:06.034059 2211 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 00:58:06.034604 kubelet[2211]: E0515 00:58:06.034573 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:06.047275 kubelet[2211]: E0515 00:58:06.047238 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.047275 kubelet[2211]: W0515 00:58:06.047263 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.047435 kubelet[2211]: E0515 00:58:06.047286 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.047497 kubelet[2211]: E0515 00:58:06.047486 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.047497 kubelet[2211]: W0515 00:58:06.047495 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.047545 kubelet[2211]: E0515 00:58:06.047501 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.047748 kubelet[2211]: E0515 00:58:06.047731 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.047748 kubelet[2211]: W0515 00:58:06.047741 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.047801 kubelet[2211]: E0515 00:58:06.047748 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.047917 kubelet[2211]: E0515 00:58:06.047906 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.047917 kubelet[2211]: W0515 00:58:06.047914 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.048015 kubelet[2211]: E0515 00:58:06.047920 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.048093 kubelet[2211]: E0515 00:58:06.048084 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.048093 kubelet[2211]: W0515 00:58:06.048091 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.048144 kubelet[2211]: E0515 00:58:06.048098 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.048233 kubelet[2211]: E0515 00:58:06.048226 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.048260 kubelet[2211]: W0515 00:58:06.048233 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.048260 kubelet[2211]: E0515 00:58:06.048240 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.048381 kubelet[2211]: E0515 00:58:06.048373 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.048406 kubelet[2211]: W0515 00:58:06.048381 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.048406 kubelet[2211]: E0515 00:58:06.048387 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.048529 kubelet[2211]: E0515 00:58:06.048521 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.048554 kubelet[2211]: W0515 00:58:06.048529 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.048554 kubelet[2211]: E0515 00:58:06.048535 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.048708 kubelet[2211]: E0515 00:58:06.048689 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.048738 kubelet[2211]: W0515 00:58:06.048710 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.048738 kubelet[2211]: E0515 00:58:06.048721 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.048879 kubelet[2211]: E0515 00:58:06.048867 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.048879 kubelet[2211]: W0515 00:58:06.048876 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.048946 kubelet[2211]: E0515 00:58:06.048883 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.049045 kubelet[2211]: E0515 00:58:06.049035 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.049045 kubelet[2211]: W0515 00:58:06.049043 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.049097 kubelet[2211]: E0515 00:58:06.049049 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.049196 kubelet[2211]: E0515 00:58:06.049187 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.049222 kubelet[2211]: W0515 00:58:06.049198 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.049222 kubelet[2211]: E0515 00:58:06.049205 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.049349 kubelet[2211]: E0515 00:58:06.049342 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.049375 kubelet[2211]: W0515 00:58:06.049349 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.049375 kubelet[2211]: E0515 00:58:06.049356 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.049494 kubelet[2211]: E0515 00:58:06.049487 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.049521 kubelet[2211]: W0515 00:58:06.049494 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.049521 kubelet[2211]: E0515 00:58:06.049500 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.049653 kubelet[2211]: E0515 00:58:06.049644 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.049678 kubelet[2211]: W0515 00:58:06.049655 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.049678 kubelet[2211]: E0515 00:58:06.049664 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.070791 env[1307]: time="2025-05-15T00:58:06.070734962Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:06.072792 env[1307]: time="2025-05-15T00:58:06.072763811Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:06.074435 env[1307]: time="2025-05-15T00:58:06.074397528Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:06.075765 env[1307]: time="2025-05-15T00:58:06.075703950Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:06.076079 env[1307]: time="2025-05-15T00:58:06.076041196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" May 15 00:58:06.078308 env[1307]: time="2025-05-15T00:58:06.078274040Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 00:58:06.093110 env[1307]: time="2025-05-15T00:58:06.093061013Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4\"" May 15 
00:58:06.093591 env[1307]: time="2025-05-15T00:58:06.093423484Z" level=info msg="StartContainer for \"662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4\"" May 15 00:58:06.145212 kubelet[2211]: E0515 00:58:06.145171 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.145212 kubelet[2211]: W0515 00:58:06.145196 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.145212 kubelet[2211]: E0515 00:58:06.145214 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.145552 kubelet[2211]: E0515 00:58:06.145482 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.145552 kubelet[2211]: W0515 00:58:06.145501 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.145552 kubelet[2211]: E0515 00:58:06.145526 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.145722 kubelet[2211]: E0515 00:58:06.145703 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.145722 kubelet[2211]: W0515 00:58:06.145717 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.145818 kubelet[2211]: E0515 00:58:06.145740 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.146053 kubelet[2211]: E0515 00:58:06.146018 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.146053 kubelet[2211]: W0515 00:58:06.146047 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.146223 kubelet[2211]: E0515 00:58:06.146086 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.146316 kubelet[2211]: E0515 00:58:06.146300 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.146316 kubelet[2211]: W0515 00:58:06.146313 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.146399 kubelet[2211]: E0515 00:58:06.146324 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.147499 kubelet[2211]: E0515 00:58:06.147480 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.147499 kubelet[2211]: W0515 00:58:06.147496 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.147590 kubelet[2211]: E0515 00:58:06.147528 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.147693 kubelet[2211]: E0515 00:58:06.147667 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.147693 kubelet[2211]: W0515 00:58:06.147683 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.147788 kubelet[2211]: E0515 00:58:06.147779 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.147855 kubelet[2211]: E0515 00:58:06.147836 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.147855 kubelet[2211]: W0515 00:58:06.147851 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.147937 kubelet[2211]: E0515 00:58:06.147869 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.148093 kubelet[2211]: E0515 00:58:06.148078 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.148093 kubelet[2211]: W0515 00:58:06.148090 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.148182 kubelet[2211]: E0515 00:58:06.148103 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.148308 kubelet[2211]: E0515 00:58:06.148290 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.148308 kubelet[2211]: W0515 00:58:06.148305 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.148407 kubelet[2211]: E0515 00:58:06.148325 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.148562 kubelet[2211]: E0515 00:58:06.148540 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.148562 kubelet[2211]: W0515 00:58:06.148557 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.148671 env[1307]: time="2025-05-15T00:58:06.148526011Z" level=info msg="StartContainer for \"662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4\" returns successfully" May 15 00:58:06.148735 kubelet[2211]: E0515 00:58:06.148579 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.148851 kubelet[2211]: E0515 00:58:06.148831 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.148851 kubelet[2211]: W0515 00:58:06.148844 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.148941 kubelet[2211]: E0515 00:58:06.148858 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 15 00:58:06.149404 kubelet[2211]: E0515 00:58:06.149383 2211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 15 00:58:06.149404 kubelet[2211]: W0515 00:58:06.149403 2211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 15 00:58:06.150910 kubelet[2211]: E0515 00:58:06.149413 2211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 15 00:58:06.186564 env[1307]: time="2025-05-15T00:58:06.186490673Z" level=info msg="shim disconnected" id=662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4 May 15 00:58:06.186564 env[1307]: time="2025-05-15T00:58:06.186544319Z" level=warning msg="cleaning up after shim disconnected" id=662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4 namespace=k8s.io May 15 00:58:06.186564 env[1307]: time="2025-05-15T00:58:06.186553039Z" level=info msg="cleaning up dead shim" May 15 00:58:06.192622 env[1307]: time="2025-05-15T00:58:06.192585288Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:58:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2902 runtime=io.containerd.runc.v2\n" May 15 00:58:06.268404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4-rootfs.mount: Deactivated successfully. 
May 15 00:58:06.796000 audit[2920]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:06.798400 kernel: kauditd_printk_skb: 19 callbacks suppressed May 15 00:58:06.798454 kernel: audit: type=1325 audit(1747270686.796:299): table=filter:95 family=2 entries=17 op=nft_register_rule pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:06.796000 audit[2920]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffcc974600 a2=0 a3=7fffcc9745ec items=0 ppid=2416 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:06.807037 kernel: audit: type=1300 audit(1747270686.796:299): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffcc974600 a2=0 a3=7fffcc9745ec items=0 ppid=2416 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:06.807082 kernel: audit: type=1327 audit(1747270686.796:299): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:06.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:06.811000 audit[2920]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:06.811000 audit[2920]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffcc974600 a2=0 a3=7fffcc9745ec items=0 ppid=2416 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:06.819510 kernel: audit: type=1325 audit(1747270686.811:300): table=nat:96 family=2 entries=19 op=nft_register_chain pid=2920 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:58:06.819549 kernel: audit: type=1300 audit(1747270686.811:300): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fffcc974600 a2=0 a3=7fffcc9745ec items=0 ppid=2416 pid=2920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:06.819571 kernel: audit: type=1327 audit(1747270686.811:300): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:06.811000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:58:06.974310 kubelet[2211]: E0515 00:58:06.974254 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:07.037032 kubelet[2211]: E0515 00:58:07.037001 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:07.037833 kubelet[2211]: E0515 00:58:07.037790 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:07.038632 env[1307]: time="2025-05-15T00:58:07.038588213Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.29.3\"" May 15 00:58:08.038462 kubelet[2211]: E0515 00:58:08.038422 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:08.974332 kubelet[2211]: E0515 00:58:08.974263 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:09.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.134:22-10.0.0.1:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:09.988729 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:35092.service. May 15 00:58:09.996543 kernel: audit: type=1130 audit(1747270689.987:301): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.134:22-10.0.0.1:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:10.028000 audit[2926]: USER_ACCT pid=2926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:10.029896 sshd[2926]: Accepted publickey for core from 10.0.0.1 port 35092 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:10.032000 audit[2926]: CRED_ACQ pid=2926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:10.033916 sshd[2926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:10.039548 kernel: audit: type=1101 audit(1747270690.028:302): pid=2926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:10.039604 kernel: audit: type=1103 audit(1747270690.032:303): pid=2926 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:10.039625 kernel: audit: type=1006 audit(1747270690.032:304): pid=2926 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 May 15 00:58:10.032000 audit[2926]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff11c691c0 a2=3 a3=0 items=0 ppid=1 pid=2926 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:10.032000 audit: 
PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:10.038472 systemd-logind[1293]: New session 9 of user core. May 15 00:58:10.038513 systemd[1]: Started session-9.scope. May 15 00:58:10.043000 audit[2926]: USER_START pid=2926 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:10.044000 audit[2929]: CRED_ACQ pid=2929 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:10.204121 sshd[2926]: pam_unix(sshd:session): session closed for user core May 15 00:58:10.204000 audit[2926]: USER_END pid=2926 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:10.204000 audit[2926]: CRED_DISP pid=2926 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:10.206000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.134:22-10.0.0.1:35092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:10.207116 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:35092.service: Deactivated successfully. May 15 00:58:10.207784 systemd[1]: session-9.scope: Deactivated successfully. May 15 00:58:10.208236 systemd-logind[1293]: Session 9 logged out. Waiting for processes to exit. 
May 15 00:58:10.208861 systemd-logind[1293]: Removed session 9. May 15 00:58:10.975082 kubelet[2211]: E0515 00:58:10.975039 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:11.356017 env[1307]: time="2025-05-15T00:58:11.355913850Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:11.357759 env[1307]: time="2025-05-15T00:58:11.357730752Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:11.359117 env[1307]: time="2025-05-15T00:58:11.359089468Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:11.360473 env[1307]: time="2025-05-15T00:58:11.360436499Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:11.360817 env[1307]: time="2025-05-15T00:58:11.360792758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" May 15 00:58:11.362422 env[1307]: time="2025-05-15T00:58:11.362386735Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 00:58:11.378366 env[1307]: time="2025-05-15T00:58:11.378319681Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a\"" May 15 00:58:11.378753 env[1307]: time="2025-05-15T00:58:11.378732902Z" level=info msg="StartContainer for \"4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a\"" May 15 00:58:11.418147 env[1307]: time="2025-05-15T00:58:11.418104439Z" level=info msg="StartContainer for \"4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a\" returns successfully" May 15 00:58:12.046395 kubelet[2211]: E0515 00:58:12.046365 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:12.777668 kubelet[2211]: I0515 00:58:12.777531 2211 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 15 00:58:12.781369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a-rootfs.mount: Deactivated successfully. 
May 15 00:58:12.786262 env[1307]: time="2025-05-15T00:58:12.784322131Z" level=info msg="shim disconnected" id=4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a May 15 00:58:12.786262 env[1307]: time="2025-05-15T00:58:12.784367437Z" level=warning msg="cleaning up after shim disconnected" id=4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a namespace=k8s.io May 15 00:58:12.786262 env[1307]: time="2025-05-15T00:58:12.784377688Z" level=info msg="cleaning up dead shim" May 15 00:58:12.791911 env[1307]: time="2025-05-15T00:58:12.791872236Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:58:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2989 runtime=io.containerd.runc.v2\n" May 15 00:58:12.798945 kubelet[2211]: I0515 00:58:12.798903 2211 topology_manager.go:215] "Topology Admit Handler" podUID="04816f63-0644-4a43-8b7e-41868b6f8780" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lg945" May 15 00:58:12.802133 kubelet[2211]: I0515 00:58:12.801033 2211 topology_manager.go:215] "Topology Admit Handler" podUID="5098666b-a231-44ec-9bf5-415e006ee772" podNamespace="calico-system" podName="calico-kube-controllers-c78b9db48-dl2b8" May 15 00:58:12.802133 kubelet[2211]: I0515 00:58:12.801247 2211 topology_manager.go:215] "Topology Admit Handler" podUID="a8b9021c-44c0-4a1b-b21d-74304d9a9ec9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-2cxpm" May 15 00:58:12.802133 kubelet[2211]: I0515 00:58:12.801861 2211 topology_manager.go:215] "Topology Admit Handler" podUID="88d3eb5f-c3af-435c-afdd-38692e59dcc7" podNamespace="calico-apiserver" podName="calico-apiserver-67fbb64cb9-tnhq7" May 15 00:58:12.802975 kubelet[2211]: I0515 00:58:12.802926 2211 topology_manager.go:215] "Topology Admit Handler" podUID="c73ac129-52ad-46f3-b7aa-1b4346bf3d86" podNamespace="calico-apiserver" podName="calico-apiserver-67fbb64cb9-vxtzh" May 15 00:58:12.977415 env[1307]: time="2025-05-15T00:58:12.977368951Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:csi-node-driver-pk5fw,Uid:234fff70-d82a-4012-9e49-d23446deada6,Namespace:calico-system,Attempt:0,}" May 15 00:58:12.991506 kubelet[2211]: I0515 00:58:12.991467 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59llx\" (UniqueName: \"kubernetes.io/projected/a8b9021c-44c0-4a1b-b21d-74304d9a9ec9-kube-api-access-59llx\") pod \"coredns-7db6d8ff4d-2cxpm\" (UID: \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\") " pod="kube-system/coredns-7db6d8ff4d-2cxpm" May 15 00:58:12.991677 kubelet[2211]: I0515 00:58:12.991656 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c73ac129-52ad-46f3-b7aa-1b4346bf3d86-calico-apiserver-certs\") pod \"calico-apiserver-67fbb64cb9-vxtzh\" (UID: \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\") " pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" May 15 00:58:12.991773 kubelet[2211]: I0515 00:58:12.991754 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgll\" (UniqueName: \"kubernetes.io/projected/c73ac129-52ad-46f3-b7aa-1b4346bf3d86-kube-api-access-rqgll\") pod \"calico-apiserver-67fbb64cb9-vxtzh\" (UID: \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\") " pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" May 15 00:58:12.991842 kubelet[2211]: I0515 00:58:12.991782 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/04816f63-0644-4a43-8b7e-41868b6f8780-config-volume\") pod \"coredns-7db6d8ff4d-lg945\" (UID: \"04816f63-0644-4a43-8b7e-41868b6f8780\") " pod="kube-system/coredns-7db6d8ff4d-lg945" May 15 00:58:12.991842 kubelet[2211]: I0515 00:58:12.991798 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql5kx\" 
(UniqueName: \"kubernetes.io/projected/04816f63-0644-4a43-8b7e-41868b6f8780-kube-api-access-ql5kx\") pod \"coredns-7db6d8ff4d-lg945\" (UID: \"04816f63-0644-4a43-8b7e-41868b6f8780\") " pod="kube-system/coredns-7db6d8ff4d-lg945" May 15 00:58:12.991842 kubelet[2211]: I0515 00:58:12.991813 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/88d3eb5f-c3af-435c-afdd-38692e59dcc7-calico-apiserver-certs\") pod \"calico-apiserver-67fbb64cb9-tnhq7\" (UID: \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\") " pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" May 15 00:58:12.991842 kubelet[2211]: I0515 00:58:12.991827 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmxd9\" (UniqueName: \"kubernetes.io/projected/88d3eb5f-c3af-435c-afdd-38692e59dcc7-kube-api-access-nmxd9\") pod \"calico-apiserver-67fbb64cb9-tnhq7\" (UID: \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\") " pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" May 15 00:58:12.991842 kubelet[2211]: I0515 00:58:12.991840 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5098666b-a231-44ec-9bf5-415e006ee772-tigera-ca-bundle\") pod \"calico-kube-controllers-c78b9db48-dl2b8\" (UID: \"5098666b-a231-44ec-9bf5-415e006ee772\") " pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" May 15 00:58:12.991992 kubelet[2211]: I0515 00:58:12.991854 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8b9021c-44c0-4a1b-b21d-74304d9a9ec9-config-volume\") pod \"coredns-7db6d8ff4d-2cxpm\" (UID: \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\") " pod="kube-system/coredns-7db6d8ff4d-2cxpm" May 15 00:58:12.991992 kubelet[2211]: I0515 00:58:12.991868 2211 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhhqr\" (UniqueName: \"kubernetes.io/projected/5098666b-a231-44ec-9bf5-415e006ee772-kube-api-access-hhhqr\") pod \"calico-kube-controllers-c78b9db48-dl2b8\" (UID: \"5098666b-a231-44ec-9bf5-415e006ee772\") " pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" May 15 00:58:13.034259 env[1307]: time="2025-05-15T00:58:13.034125740Z" level=error msg="Failed to destroy network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.034529 env[1307]: time="2025-05-15T00:58:13.034438551Z" level=error msg="encountered an error cleaning up failed sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.034529 env[1307]: time="2025-05-15T00:58:13.034479177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pk5fw,Uid:234fff70-d82a-4012-9e49-d23446deada6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.034754 kubelet[2211]: E0515 00:58:13.034697 2211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.034828 kubelet[2211]: E0515 00:58:13.034770 2211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pk5fw" May 15 00:58:13.034828 kubelet[2211]: E0515 00:58:13.034793 2211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pk5fw" May 15 00:58:13.034886 kubelet[2211]: E0515 00:58:13.034829 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pk5fw_calico-system(234fff70-d82a-4012-9e49-d23446deada6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pk5fw_calico-system(234fff70-d82a-4012-9e49-d23446deada6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pk5fw" 
podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:13.037234 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b-shm.mount: Deactivated successfully. May 15 00:58:13.049268 kubelet[2211]: E0515 00:58:13.049244 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:13.050339 kubelet[2211]: I0515 00:58:13.049966 2211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:58:13.050469 env[1307]: time="2025-05-15T00:58:13.050077833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" May 15 00:58:13.050541 env[1307]: time="2025-05-15T00:58:13.050506549Z" level=info msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\"" May 15 00:58:13.075562 env[1307]: time="2025-05-15T00:58:13.075472594Z" level=error msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" failed" error="failed to destroy network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.075825 kubelet[2211]: E0515 00:58:13.075780 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:58:13.075895 kubelet[2211]: E0515 00:58:13.075848 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b"} May 15 00:58:13.075924 kubelet[2211]: E0515 00:58:13.075911 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"234fff70-d82a-4012-9e49-d23446deada6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:13.076014 kubelet[2211]: E0515 00:58:13.075934 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"234fff70-d82a-4012-9e49-d23446deada6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:13.105667 env[1307]: time="2025-05-15T00:58:13.105626279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c78b9db48-dl2b8,Uid:5098666b-a231-44ec-9bf5-415e006ee772,Namespace:calico-system,Attempt:0,}" May 15 00:58:13.108205 kubelet[2211]: E0515 00:58:13.108159 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
May 15 00:58:13.109402 env[1307]: time="2025-05-15T00:58:13.109355394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2cxpm,Uid:a8b9021c-44c0-4a1b-b21d-74304d9a9ec9,Namespace:kube-system,Attempt:0,}" May 15 00:58:13.110187 env[1307]: time="2025-05-15T00:58:13.110159365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fbb64cb9-vxtzh,Uid:c73ac129-52ad-46f3-b7aa-1b4346bf3d86,Namespace:calico-apiserver,Attempt:0,}" May 15 00:58:13.110929 env[1307]: time="2025-05-15T00:58:13.110162391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fbb64cb9-tnhq7,Uid:88d3eb5f-c3af-435c-afdd-38692e59dcc7,Namespace:calico-apiserver,Attempt:0,}" May 15 00:58:13.190339 env[1307]: time="2025-05-15T00:58:13.190270191Z" level=error msg="Failed to destroy network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.190672 env[1307]: time="2025-05-15T00:58:13.190634301Z" level=error msg="encountered an error cleaning up failed sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.190897 env[1307]: time="2025-05-15T00:58:13.190852462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fbb64cb9-tnhq7,Uid:88d3eb5f-c3af-435c-afdd-38692e59dcc7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.191370 kubelet[2211]: E0515 00:58:13.191071 2211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.191370 kubelet[2211]: E0515 00:58:13.191127 2211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" May 15 00:58:13.191370 kubelet[2211]: E0515 00:58:13.191146 2211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" May 15 00:58:13.191492 kubelet[2211]: E0515 00:58:13.191184 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67fbb64cb9-tnhq7_calico-apiserver(88d3eb5f-c3af-435c-afdd-38692e59dcc7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67fbb64cb9-tnhq7_calico-apiserver(88d3eb5f-c3af-435c-afdd-38692e59dcc7)\\\": rpc error: code = Unknown 
desc = failed to setup network for sandbox \\\"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" podUID="88d3eb5f-c3af-435c-afdd-38692e59dcc7" May 15 00:58:13.208035 env[1307]: time="2025-05-15T00:58:13.207948906Z" level=error msg="Failed to destroy network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.208355 env[1307]: time="2025-05-15T00:58:13.208319269Z" level=error msg="encountered an error cleaning up failed sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.208421 env[1307]: time="2025-05-15T00:58:13.208365046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c78b9db48-dl2b8,Uid:5098666b-a231-44ec-9bf5-415e006ee772,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.209321 kubelet[2211]: E0515 00:58:13.208589 2211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.209321 kubelet[2211]: E0515 00:58:13.208632 2211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" May 15 00:58:13.209321 kubelet[2211]: E0515 00:58:13.208649 2211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" May 15 00:58:13.209468 kubelet[2211]: E0515 00:58:13.208680 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c78b9db48-dl2b8_calico-system(5098666b-a231-44ec-9bf5-415e006ee772)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c78b9db48-dl2b8_calico-system(5098666b-a231-44ec-9bf5-415e006ee772)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" podUID="5098666b-a231-44ec-9bf5-415e006ee772" May 15 00:58:13.219695 env[1307]: time="2025-05-15T00:58:13.219633985Z" level=error msg="Failed to destroy network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.220039 env[1307]: time="2025-05-15T00:58:13.219998365Z" level=error msg="encountered an error cleaning up failed sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.220236 env[1307]: time="2025-05-15T00:58:13.220047219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2cxpm,Uid:a8b9021c-44c0-4a1b-b21d-74304d9a9ec9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.220284 kubelet[2211]: E0515 00:58:13.220209 2211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.220284 kubelet[2211]: E0515 00:58:13.220243 
2211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2cxpm" May 15 00:58:13.220284 kubelet[2211]: E0515 00:58:13.220264 2211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-2cxpm" May 15 00:58:13.220395 kubelet[2211]: E0515 00:58:13.220296 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-2cxpm_kube-system(a8b9021c-44c0-4a1b-b21d-74304d9a9ec9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-2cxpm_kube-system(a8b9021c-44c0-4a1b-b21d-74304d9a9ec9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2cxpm" podUID="a8b9021c-44c0-4a1b-b21d-74304d9a9ec9" May 15 00:58:13.225951 env[1307]: time="2025-05-15T00:58:13.225899114Z" level=error msg="Failed to destroy network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.226254 env[1307]: time="2025-05-15T00:58:13.226218489Z" level=error msg="encountered an error cleaning up failed sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.226289 env[1307]: time="2025-05-15T00:58:13.226265819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fbb64cb9-vxtzh,Uid:c73ac129-52ad-46f3-b7aa-1b4346bf3d86,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.226448 kubelet[2211]: E0515 00:58:13.226414 2211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.226448 kubelet[2211]: E0515 00:58:13.226447 2211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" May 15 00:58:13.226448 kubelet[2211]: E0515 00:58:13.226463 2211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" May 15 00:58:13.226612 kubelet[2211]: E0515 00:58:13.226490 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67fbb64cb9-vxtzh_calico-apiserver(c73ac129-52ad-46f3-b7aa-1b4346bf3d86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67fbb64cb9-vxtzh_calico-apiserver(c73ac129-52ad-46f3-b7aa-1b4346bf3d86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" podUID="c73ac129-52ad-46f3-b7aa-1b4346bf3d86" May 15 00:58:13.402797 kubelet[2211]: E0515 00:58:13.401447 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:13.402934 env[1307]: time="2025-05-15T00:58:13.402308960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lg945,Uid:04816f63-0644-4a43-8b7e-41868b6f8780,Namespace:kube-system,Attempt:0,}" May 15 00:58:13.455108 env[1307]: time="2025-05-15T00:58:13.455040788Z" level=error msg="Failed to destroy network for sandbox 
\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.455412 env[1307]: time="2025-05-15T00:58:13.455375697Z" level=error msg="encountered an error cleaning up failed sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.455452 env[1307]: time="2025-05-15T00:58:13.455422737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lg945,Uid:04816f63-0644-4a43-8b7e-41868b6f8780,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.455670 kubelet[2211]: E0515 00:58:13.455628 2211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:13.455751 kubelet[2211]: E0515 00:58:13.455685 2211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lg945" May 15 00:58:13.455751 kubelet[2211]: E0515 00:58:13.455703 2211 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lg945" May 15 00:58:13.455810 kubelet[2211]: E0515 00:58:13.455745 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lg945_kube-system(04816f63-0644-4a43-8b7e-41868b6f8780)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lg945_kube-system(04816f63-0644-4a43-8b7e-41868b6f8780)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lg945" podUID="04816f63-0644-4a43-8b7e-41868b6f8780" May 15 00:58:14.052101 kubelet[2211]: I0515 00:58:14.052073 2211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:58:14.052585 env[1307]: time="2025-05-15T00:58:14.052556019Z" level=info msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\"" May 15 00:58:14.053120 kubelet[2211]: I0515 00:58:14.053097 2211 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:58:14.053596 env[1307]: time="2025-05-15T00:58:14.053566057Z" level=info msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\"" May 15 00:58:14.054940 kubelet[2211]: I0515 00:58:14.054607 2211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:58:14.055226 env[1307]: time="2025-05-15T00:58:14.055196993Z" level=info msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\"" May 15 00:58:14.056930 kubelet[2211]: I0515 00:58:14.056595 2211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:58:14.057017 env[1307]: time="2025-05-15T00:58:14.056919564Z" level=info msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\"" May 15 00:58:14.058240 kubelet[2211]: I0515 00:58:14.058212 2211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:58:14.058653 env[1307]: time="2025-05-15T00:58:14.058632573Z" level=info msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\"" May 15 00:58:14.088777 env[1307]: time="2025-05-15T00:58:14.088693938Z" level=error msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" failed" error="failed to destroy network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:14.089000 kubelet[2211]: E0515 00:58:14.088937 2211 remote_runtime.go:222] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:58:14.089066 kubelet[2211]: E0515 00:58:14.088999 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68"} May 15 00:58:14.089066 kubelet[2211]: E0515 00:58:14.089034 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:14.089066 kubelet[2211]: E0515 00:58:14.089056 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" podUID="88d3eb5f-c3af-435c-afdd-38692e59dcc7" May 15 00:58:14.091991 env[1307]: time="2025-05-15T00:58:14.091926218Z" level=error 
msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" failed" error="failed to destroy network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:14.092134 kubelet[2211]: E0515 00:58:14.092107 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:58:14.092188 kubelet[2211]: E0515 00:58:14.092136 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087"} May 15 00:58:14.092188 kubelet[2211]: E0515 00:58:14.092157 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04816f63-0644-4a43-8b7e-41868b6f8780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:14.092188 kubelet[2211]: E0515 00:58:14.092173 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04816f63-0644-4a43-8b7e-41868b6f8780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lg945" podUID="04816f63-0644-4a43-8b7e-41868b6f8780" May 15 00:58:14.098086 env[1307]: time="2025-05-15T00:58:14.098049731Z" level=error msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" failed" error="failed to destroy network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:14.098276 kubelet[2211]: E0515 00:58:14.098244 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:58:14.098341 kubelet[2211]: E0515 00:58:14.098290 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f"} May 15 00:58:14.098341 kubelet[2211]: E0515 00:58:14.098319 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:14.098423 kubelet[2211]: E0515 00:58:14.098342 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" podUID="c73ac129-52ad-46f3-b7aa-1b4346bf3d86" May 15 00:58:14.099565 env[1307]: time="2025-05-15T00:58:14.099499195Z" level=error msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" failed" error="failed to destroy network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:14.099762 kubelet[2211]: E0515 00:58:14.099730 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:58:14.099762 kubelet[2211]: E0515 00:58:14.099759 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc"} May 15 00:58:14.099942 kubelet[2211]: E0515 00:58:14.099781 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5098666b-a231-44ec-9bf5-415e006ee772\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:14.099942 kubelet[2211]: E0515 00:58:14.099797 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5098666b-a231-44ec-9bf5-415e006ee772\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" podUID="5098666b-a231-44ec-9bf5-415e006ee772" May 15 00:58:14.101989 env[1307]: time="2025-05-15T00:58:14.101945621Z" level=error msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" failed" error="failed to destroy network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:14.102148 kubelet[2211]: E0515 00:58:14.102105 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:58:14.102211 kubelet[2211]: E0515 00:58:14.102156 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a"} May 15 00:58:14.102211 kubelet[2211]: E0515 00:58:14.102191 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:14.102295 kubelet[2211]: E0515 00:58:14.102226 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2cxpm" podUID="a8b9021c-44c0-4a1b-b21d-74304d9a9ec9" May 15 00:58:15.208025 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:35094.service. 
May 15 00:58:15.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.134:22-10.0.0.1:35094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:15.209114 kernel: kauditd_printk_skb: 7 callbacks suppressed May 15 00:58:15.209155 kernel: audit: type=1130 audit(1747270695.207:310): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.134:22-10.0.0.1:35094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:15.576000 audit[3377]: USER_ACCT pid=3377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.578450 sshd[3377]: Accepted publickey for core from 10.0.0.1 port 35094 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:15.578000 audit[3377]: CRED_ACQ pid=3377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.581846 sshd[3377]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:15.588144 kernel: audit: type=1101 audit(1747270695.576:311): pid=3377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.588249 kernel: audit: type=1103 audit(1747270695.578:312): pid=3377 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.588285 kernel: audit: type=1006 audit(1747270695.579:313): pid=3377 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 15 00:58:15.593107 kernel: audit: type=1300 audit(1747270695.579:313): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7491fbf0 a2=3 a3=0 items=0 ppid=1 pid=3377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:15.579000 audit[3377]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7491fbf0 a2=3 a3=0 items=0 ppid=1 pid=3377 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:15.588836 systemd[1]: Started session-10.scope. May 15 00:58:15.590163 systemd-logind[1293]: New session 10 of user core. 
May 15 00:58:15.595236 kernel: audit: type=1327 audit(1747270695.579:313): proctitle=737368643A20636F7265205B707269765D May 15 00:58:15.579000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:15.598000 audit[3377]: USER_START pid=3377 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.604984 kernel: audit: type=1105 audit(1747270695.598:314): pid=3377 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.604000 audit[3380]: CRED_ACQ pid=3380 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.609332 kernel: audit: type=1103 audit(1747270695.604:315): pid=3380 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.819288 sshd[3377]: pam_unix(sshd:session): session closed for user core May 15 00:58:15.819000 audit[3377]: USER_END pid=3377 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.830325 kernel: audit: type=1106 audit(1747270695.819:316): pid=3377 uid=0 auid=500 ses=10 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.830479 kernel: audit: type=1104 audit(1747270695.825:317): pid=3377 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.825000 audit[3377]: CRED_DISP pid=3377 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:15.829328 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:35094.service: Deactivated successfully. May 15 00:58:15.830020 systemd[1]: session-10.scope: Deactivated successfully. May 15 00:58:15.830654 systemd-logind[1293]: Session 10 logged out. Waiting for processes to exit. May 15 00:58:15.831533 systemd-logind[1293]: Removed session 10. May 15 00:58:15.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.134:22-10.0.0.1:35094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:19.093748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3532313549.mount: Deactivated successfully. 
May 15 00:58:20.347079 env[1307]: time="2025-05-15T00:58:20.347013438Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:20.349138 env[1307]: time="2025-05-15T00:58:20.349084979Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:20.351505 env[1307]: time="2025-05-15T00:58:20.351481171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:20.353898 env[1307]: time="2025-05-15T00:58:20.353852092Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:58:20.354264 env[1307]: time="2025-05-15T00:58:20.354221457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" May 15 00:58:20.363856 env[1307]: time="2025-05-15T00:58:20.363809704Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 00:58:20.378562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983808814.mount: Deactivated successfully. 
May 15 00:58:20.380678 env[1307]: time="2025-05-15T00:58:20.380637757Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10\"" May 15 00:58:20.381174 env[1307]: time="2025-05-15T00:58:20.381149647Z" level=info msg="StartContainer for \"3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10\"" May 15 00:58:20.424405 env[1307]: time="2025-05-15T00:58:20.424355558Z" level=info msg="StartContainer for \"3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10\" returns successfully" May 15 00:58:20.484627 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 15 00:58:20.484735 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 15 00:58:20.689526 env[1307]: time="2025-05-15T00:58:20.689477922Z" level=info msg="shim disconnected" id=3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10 May 15 00:58:20.689526 env[1307]: time="2025-05-15T00:58:20.689529619Z" level=warning msg="cleaning up after shim disconnected" id=3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10 namespace=k8s.io May 15 00:58:20.689750 env[1307]: time="2025-05-15T00:58:20.689539380Z" level=info msg="cleaning up dead shim" May 15 00:58:20.695701 env[1307]: time="2025-05-15T00:58:20.695652340Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:58:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3452 runtime=io.containerd.runc.v2\n" May 15 00:58:20.823359 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:42514.service. May 15 00:58:20.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.134:22-10.0.0.1:42514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:20.824675 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:58:20.824745 kernel: audit: type=1130 audit(1747270700.822:319): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.134:22-10.0.0.1:42514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:20.858000 audit[3464]: USER_ACCT pid=3464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.860069 sshd[3464]: Accepted publickey for core from 10.0.0.1 port 42514 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:20.862119 sshd[3464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:20.860000 audit[3464]: CRED_ACQ pid=3464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.865744 systemd-logind[1293]: New session 11 of user core. May 15 00:58:20.866715 systemd[1]: Started session-11.scope. 
May 15 00:58:20.868660 kernel: audit: type=1101 audit(1747270700.858:320): pid=3464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.868715 kernel: audit: type=1103 audit(1747270700.860:321): pid=3464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.868735 kernel: audit: type=1006 audit(1747270700.860:322): pid=3464 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 15 00:58:20.871417 kernel: audit: type=1300 audit(1747270700.860:322): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2eeddc90 a2=3 a3=0 items=0 ppid=1 pid=3464 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:20.860000 audit[3464]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc2eeddc90 a2=3 a3=0 items=0 ppid=1 pid=3464 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:20.875544 kernel: audit: type=1327 audit(1747270700.860:322): proctitle=737368643A20636F7265205B707269765D May 15 00:58:20.860000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:20.870000 audit[3464]: USER_START pid=3464 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' May 15 00:58:20.881067 kernel: audit: type=1105 audit(1747270700.870:323): pid=3464 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.881108 kernel: audit: type=1103 audit(1747270700.871:324): pid=3467 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.871000 audit[3467]: CRED_ACQ pid=3467 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.976097 sshd[3464]: pam_unix(sshd:session): session closed for user core May 15 00:58:20.976000 audit[3464]: USER_END pid=3464 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.978581 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:42520.service. May 15 00:58:20.979008 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:42514.service: Deactivated successfully. May 15 00:58:20.979588 systemd[1]: session-11.scope: Deactivated successfully. May 15 00:58:20.980839 systemd-logind[1293]: Session 11 logged out. Waiting for processes to exit. 
May 15 00:58:20.976000 audit[3464]: CRED_DISP pid=3464 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.981912 systemd-logind[1293]: Removed session 11. May 15 00:58:20.984823 kernel: audit: type=1106 audit(1747270700.976:325): pid=3464 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.984893 kernel: audit: type=1104 audit(1747270700.976:326): pid=3464 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:20.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.134:22-10.0.0.1:42520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:20.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.134:22-10.0.0.1:42514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:21.012000 audit[3477]: USER_ACCT pid=3477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.013649 sshd[3477]: Accepted publickey for core from 10.0.0.1 port 42520 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:21.013000 audit[3477]: CRED_ACQ pid=3477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.013000 audit[3477]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf2424700 a2=3 a3=0 items=0 ppid=1 pid=3477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:21.013000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:21.014721 sshd[3477]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:21.018167 systemd-logind[1293]: New session 12 of user core. May 15 00:58:21.018851 systemd[1]: Started session-12.scope. 
May 15 00:58:21.021000 audit[3477]: USER_START pid=3477 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.023000 audit[3482]: CRED_ACQ pid=3482 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.073371 kubelet[2211]: I0515 00:58:21.073040 2211 scope.go:117] "RemoveContainer" containerID="3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10" May 15 00:58:21.073371 kubelet[2211]: E0515 00:58:21.073112 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:21.076573 env[1307]: time="2025-05-15T00:58:21.076528795Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" May 15 00:58:21.092386 env[1307]: time="2025-05-15T00:58:21.092331869Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22\"" May 15 00:58:21.092827 env[1307]: time="2025-05-15T00:58:21.092790757Z" level=info msg="StartContainer for \"25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22\"" May 15 00:58:21.139054 env[1307]: time="2025-05-15T00:58:21.138991711Z" level=info msg="StartContainer for \"25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22\" returns successfully" May 15 00:58:21.161617 sshd[3477]: 
pam_unix(sshd:session): session closed for user core May 15 00:58:21.163119 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:42524.service. May 15 00:58:21.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.134:22-10.0.0.1:42524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:21.164000 audit[3477]: USER_END pid=3477 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.164000 audit[3477]: CRED_DISP pid=3477 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.166809 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:42520.service: Deactivated successfully. May 15 00:58:21.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.134:22-10.0.0.1:42520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:21.167675 systemd[1]: session-12.scope: Deactivated successfully. May 15 00:58:21.168624 systemd-logind[1293]: Session 12 logged out. Waiting for processes to exit. May 15 00:58:21.169614 systemd-logind[1293]: Removed session 12. 
May 15 00:58:21.202000 audit[3528]: USER_ACCT pid=3528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.204045 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 42524 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:21.203000 audit[3528]: CRED_ACQ pid=3528 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.203000 audit[3528]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff9660750 a2=3 a3=0 items=0 ppid=1 pid=3528 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:21.203000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:21.205039 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:21.209715 systemd[1]: Started session-13.scope. May 15 00:58:21.210743 systemd-logind[1293]: New session 13 of user core. 
May 15 00:58:21.214000 audit[3528]: USER_START pid=3528 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.216000 audit[3546]: CRED_ACQ pid=3546 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.223801 env[1307]: time="2025-05-15T00:58:21.223763825Z" level=info msg="shim disconnected" id=25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22 May 15 00:58:21.223946 env[1307]: time="2025-05-15T00:58:21.223913023Z" level=warning msg="cleaning up after shim disconnected" id=25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22 namespace=k8s.io May 15 00:58:21.223946 env[1307]: time="2025-05-15T00:58:21.223933004Z" level=info msg="cleaning up dead shim" May 15 00:58:21.230145 env[1307]: time="2025-05-15T00:58:21.230042198Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:58:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3547 runtime=io.containerd.runc.v2\n" May 15 00:58:21.314245 sshd[3528]: pam_unix(sshd:session): session closed for user core May 15 00:58:21.314000 audit[3528]: USER_END pid=3528 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:21.314000 audit[3528]: CRED_DISP pid=3528 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' May 15 00:58:21.316910 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:42524.service: Deactivated successfully. May 15 00:58:21.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.134:22-10.0.0.1:42524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:21.318144 systemd-logind[1293]: Session 13 logged out. Waiting for processes to exit. May 15 00:58:21.318220 systemd[1]: session-13.scope: Deactivated successfully. May 15 00:58:21.319659 systemd-logind[1293]: Removed session 13. May 15 00:58:21.359362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10-rootfs.mount: Deactivated successfully. May 15 00:58:22.077198 kubelet[2211]: I0515 00:58:22.077159 2211 scope.go:117] "RemoveContainer" containerID="3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10" May 15 00:58:22.077598 kubelet[2211]: I0515 00:58:22.077483 2211 scope.go:117] "RemoveContainer" containerID="25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22" May 15 00:58:22.077598 kubelet[2211]: E0515 00:58:22.077576 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:22.078185 env[1307]: time="2025-05-15T00:58:22.078156848Z" level=info msg="RemoveContainer for \"3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10\"" May 15 00:58:22.079630 kubelet[2211]: E0515 00:58:22.079153 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-dpnsl_calico-system(1af1a5f2-4933-4456-b057-97057326582c)\"" pod="calico-system/calico-node-dpnsl" podUID="1af1a5f2-4933-4456-b057-97057326582c" May 15 
00:58:22.082541 env[1307]: time="2025-05-15T00:58:22.082502548Z" level=info msg="RemoveContainer for \"3161c98efbec26e61aa03d19e4ce38b05618c1464200990a905cdc0e59740b10\" returns successfully" May 15 00:58:23.081450 kubelet[2211]: I0515 00:58:23.081405 2211 scope.go:117] "RemoveContainer" containerID="25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22" May 15 00:58:23.081850 kubelet[2211]: E0515 00:58:23.081499 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:23.082534 kubelet[2211]: E0515 00:58:23.082506 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-dpnsl_calico-system(1af1a5f2-4933-4456-b057-97057326582c)\"" pod="calico-system/calico-node-dpnsl" podUID="1af1a5f2-4933-4456-b057-97057326582c" May 15 00:58:25.974253 env[1307]: time="2025-05-15T00:58:25.974209646Z" level=info msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\"" May 15 00:58:25.974887 env[1307]: time="2025-05-15T00:58:25.974838004Z" level=info msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\"" May 15 00:58:25.999512 env[1307]: time="2025-05-15T00:58:25.999454448Z" level=error msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" failed" error="failed to destroy network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:25.999800 env[1307]: time="2025-05-15T00:58:25.999735164Z" level=error msg="StopPodSandbox for 
\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" failed" error="failed to destroy network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:25.999919 kubelet[2211]: E0515 00:58:25.999861 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:58:26.000187 kubelet[2211]: E0515 00:58:25.999930 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087"} May 15 00:58:26.000187 kubelet[2211]: E0515 00:58:25.999977 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04816f63-0644-4a43-8b7e-41868b6f8780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:26.000187 kubelet[2211]: E0515 00:58:26.000000 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04816f63-0644-4a43-8b7e-41868b6f8780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lg945" podUID="04816f63-0644-4a43-8b7e-41868b6f8780" May 15 00:58:26.000187 kubelet[2211]: E0515 00:58:26.000066 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:58:26.000187 kubelet[2211]: E0515 00:58:26.000083 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f"} May 15 00:58:26.000365 kubelet[2211]: E0515 00:58:26.000100 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:26.000365 kubelet[2211]: E0515 00:58:26.000118 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" podUID="c73ac129-52ad-46f3-b7aa-1b4346bf3d86" May 15 00:58:26.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.134:22-10.0.0.1:42536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:26.319554 kernel: kauditd_printk_skb: 23 callbacks suppressed May 15 00:58:26.319690 kernel: audit: type=1130 audit(1747270706.317:346): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.134:22-10.0.0.1:42536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:26.318086 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:42536.service. May 15 00:58:26.350000 audit[3623]: USER_ACCT pid=3623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.351287 sshd[3623]: Accepted publickey for core from 10.0.0.1 port 42536 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:26.353334 sshd[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:26.352000 audit[3623]: CRED_ACQ pid=3623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.356480 systemd-logind[1293]: New session 14 of user core. 
May 15 00:58:26.357448 systemd[1]: Started session-14.scope. May 15 00:58:26.358663 kernel: audit: type=1101 audit(1747270706.350:347): pid=3623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.358723 kernel: audit: type=1103 audit(1747270706.352:348): pid=3623 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.358761 kernel: audit: type=1006 audit(1747270706.352:349): pid=3623 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 May 15 00:58:26.352000 audit[3623]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3b47efb0 a2=3 a3=0 items=0 ppid=1 pid=3623 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:26.365140 kernel: audit: type=1300 audit(1747270706.352:349): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3b47efb0 a2=3 a3=0 items=0 ppid=1 pid=3623 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:26.365187 kernel: audit: type=1327 audit(1747270706.352:349): proctitle=737368643A20636F7265205B707269765D May 15 00:58:26.352000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:26.360000 audit[3623]: USER_START pid=3623 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.370826 kernel: audit: type=1105 audit(1747270706.360:350): pid=3623 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.370915 kernel: audit: type=1103 audit(1747270706.361:351): pid=3626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.361000 audit[3626]: CRED_ACQ pid=3626 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.457382 sshd[3623]: pam_unix(sshd:session): session closed for user core May 15 00:58:26.457000 audit[3623]: USER_END pid=3623 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.459307 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:42536.service: Deactivated successfully. May 15 00:58:26.460296 systemd-logind[1293]: Session 14 logged out. Waiting for processes to exit. May 15 00:58:26.460348 systemd[1]: session-14.scope: Deactivated successfully. May 15 00:58:26.461090 systemd-logind[1293]: Removed session 14. 
May 15 00:58:26.457000 audit[3623]: CRED_DISP pid=3623 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.466018 kernel: audit: type=1106 audit(1747270706.457:352): pid=3623 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.466069 kernel: audit: type=1104 audit(1747270706.457:353): pid=3623 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:26.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.134:22-10.0.0.1:42536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:27.974515 env[1307]: time="2025-05-15T00:58:27.974470728Z" level=info msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\"" May 15 00:58:27.975244 env[1307]: time="2025-05-15T00:58:27.974470728Z" level=info msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\"" May 15 00:58:27.975565 env[1307]: time="2025-05-15T00:58:27.974535030Z" level=info msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\"" May 15 00:58:28.002057 env[1307]: time="2025-05-15T00:58:28.001999405Z" level=error msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" failed" error="failed to destroy network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:28.002545 kubelet[2211]: E0515 00:58:28.002482 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:58:28.002882 kubelet[2211]: E0515 00:58:28.002559 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc"} May 15 00:58:28.002882 kubelet[2211]: E0515 00:58:28.002592 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5098666b-a231-44ec-9bf5-415e006ee772\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:28.002882 kubelet[2211]: E0515 00:58:28.002615 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5098666b-a231-44ec-9bf5-415e006ee772\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" podUID="5098666b-a231-44ec-9bf5-415e006ee772" May 15 00:58:28.003036 env[1307]: time="2025-05-15T00:58:28.002782682Z" level=error msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" failed" error="failed to destroy network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:28.003071 kubelet[2211]: E0515 00:58:28.002976 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:58:28.003071 kubelet[2211]: E0515 00:58:28.003003 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68"} May 15 00:58:28.003071 kubelet[2211]: E0515 00:58:28.003023 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:28.003071 kubelet[2211]: E0515 00:58:28.003047 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" podUID="88d3eb5f-c3af-435c-afdd-38692e59dcc7" May 15 00:58:28.010607 env[1307]: time="2025-05-15T00:58:28.010566604Z" level=error msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" failed" error="failed to destroy network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 15 00:58:28.010728 kubelet[2211]: E0515 00:58:28.010699 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:58:28.010780 kubelet[2211]: E0515 00:58:28.010740 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a"} May 15 00:58:28.010780 kubelet[2211]: E0515 00:58:28.010764 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:28.010854 kubelet[2211]: E0515 00:58:28.010782 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2cxpm" podUID="a8b9021c-44c0-4a1b-b21d-74304d9a9ec9" May 15 
00:58:28.973912 env[1307]: time="2025-05-15T00:58:28.973874830Z" level=info msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\"" May 15 00:58:28.993911 env[1307]: time="2025-05-15T00:58:28.993857932Z" level=error msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" failed" error="failed to destroy network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:28.994264 kubelet[2211]: E0515 00:58:28.994070 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:58:28.994264 kubelet[2211]: E0515 00:58:28.994115 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b"} May 15 00:58:28.994264 kubelet[2211]: E0515 00:58:28.994146 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"234fff70-d82a-4012-9e49-d23446deada6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 
00:58:28.994264 kubelet[2211]: E0515 00:58:28.994166 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"234fff70-d82a-4012-9e49-d23446deada6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:31.461433 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:38970.service. May 15 00:58:31.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.134:22-10.0.0.1:38970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:31.462594 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:58:31.462720 kernel: audit: type=1130 audit(1747270711.460:355): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.134:22-10.0.0.1:38970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:31.495000 audit[3732]: USER_ACCT pid=3732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.496321 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 38970 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:31.498372 sshd[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:31.497000 audit[3732]: CRED_ACQ pid=3732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.501472 systemd-logind[1293]: New session 15 of user core. May 15 00:58:31.502175 systemd[1]: Started session-15.scope. May 15 00:58:31.503637 kernel: audit: type=1101 audit(1747270711.495:356): pid=3732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.503701 kernel: audit: type=1103 audit(1747270711.497:357): pid=3732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.503723 kernel: audit: type=1006 audit(1747270711.497:358): pid=3732 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 15 00:58:31.497000 audit[3732]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecb0d63b0 a2=3 a3=0 items=0 ppid=1 pid=3732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:31.509832 kernel: audit: type=1300 audit(1747270711.497:358): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecb0d63b0 a2=3 a3=0 items=0 ppid=1 pid=3732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:31.509910 kernel: audit: type=1327 audit(1747270711.497:358): proctitle=737368643A20636F7265205B707269765D May 15 00:58:31.497000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:31.508000 audit[3732]: USER_START pid=3732 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.515516 kernel: audit: type=1105 audit(1747270711.508:359): pid=3732 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.515559 kernel: audit: type=1103 audit(1747270711.509:360): pid=3735 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.509000 audit[3735]: CRED_ACQ pid=3735 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.620690 sshd[3732]: pam_unix(sshd:session): session closed for user core May 15 00:58:31.620000 
audit[3732]: USER_END pid=3732 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.622917 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:38970.service: Deactivated successfully. May 15 00:58:31.623997 systemd[1]: session-15.scope: Deactivated successfully. May 15 00:58:31.624422 systemd-logind[1293]: Session 15 logged out. Waiting for processes to exit. May 15 00:58:31.625263 systemd-logind[1293]: Removed session 15. May 15 00:58:31.620000 audit[3732]: CRED_DISP pid=3732 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.639877 kernel: audit: type=1106 audit(1747270711.620:361): pid=3732 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.639925 kernel: audit: type=1104 audit(1747270711.620:362): pid=3732 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:31.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.134:22-10.0.0.1:38970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:35.974177 kubelet[2211]: I0515 00:58:35.974134 2211 scope.go:117] "RemoveContainer" containerID="25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22" May 15 00:58:35.974521 kubelet[2211]: E0515 00:58:35.974220 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:35.976465 env[1307]: time="2025-05-15T00:58:35.976423239Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" May 15 00:58:35.988808 env[1307]: time="2025-05-15T00:58:35.988776999Z" level=info msg="CreateContainer within sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599\"" May 15 00:58:35.989203 env[1307]: time="2025-05-15T00:58:35.989184544Z" level=info msg="StartContainer for \"29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599\"" May 15 00:58:36.028837 env[1307]: time="2025-05-15T00:58:36.028791717Z" level=info msg="StartContainer for \"29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599\" returns successfully" May 15 00:58:36.089893 env[1307]: time="2025-05-15T00:58:36.089846924Z" level=info msg="shim disconnected" id=29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599 May 15 00:58:36.090080 env[1307]: time="2025-05-15T00:58:36.089896225Z" level=warning msg="cleaning up after shim disconnected" id=29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599 namespace=k8s.io May 15 00:58:36.090080 env[1307]: time="2025-05-15T00:58:36.089904882Z" level=info msg="cleaning up dead shim" May 15 00:58:36.095311 env[1307]: time="2025-05-15T00:58:36.095258967Z" level=warning msg="cleanup warnings 
time=\"2025-05-15T00:58:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3799 runtime=io.containerd.runc.v2\n" May 15 00:58:36.103507 kubelet[2211]: I0515 00:58:36.103489 2211 scope.go:117] "RemoveContainer" containerID="25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22" May 15 00:58:36.103842 kubelet[2211]: I0515 00:58:36.103813 2211 scope.go:117] "RemoveContainer" containerID="29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599" May 15 00:58:36.104723 kubelet[2211]: E0515 00:58:36.103914 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:36.104723 kubelet[2211]: E0515 00:58:36.104309 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-dpnsl_calico-system(1af1a5f2-4933-4456-b057-97057326582c)\"" pod="calico-system/calico-node-dpnsl" podUID="1af1a5f2-4933-4456-b057-97057326582c" May 15 00:58:36.104810 env[1307]: time="2025-05-15T00:58:36.104451834Z" level=info msg="RemoveContainer for \"25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22\"" May 15 00:58:36.107787 env[1307]: time="2025-05-15T00:58:36.107756576Z" level=info msg="RemoveContainer for \"25874aa3c7ae73e547587fc77760e30053a694961edb76772df1fbcec58c3b22\" returns successfully" May 15 00:58:36.624029 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:39516.service. May 15 00:58:36.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.134:22-10.0.0.1:39516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:36.625234 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:58:36.625345 kernel: audit: type=1130 audit(1747270716.623:364): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.134:22-10.0.0.1:39516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:36.658000 audit[3811]: USER_ACCT pid=3811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.659304 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 39516 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:36.661434 sshd[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:36.660000 audit[3811]: CRED_ACQ pid=3811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.664658 systemd-logind[1293]: New session 16 of user core. May 15 00:58:36.665352 systemd[1]: Started session-16.scope. 
May 15 00:58:36.667430 kernel: audit: type=1101 audit(1747270716.658:365): pid=3811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.667548 kernel: audit: type=1103 audit(1747270716.660:366): pid=3811 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.670070 kernel: audit: type=1006 audit(1747270716.660:367): pid=3811 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 15 00:58:36.670123 kernel: audit: type=1300 audit(1747270716.660:367): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd0288220 a2=3 a3=0 items=0 ppid=1 pid=3811 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:36.660000 audit[3811]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd0288220 a2=3 a3=0 items=0 ppid=1 pid=3811 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:36.660000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:36.675849 kernel: audit: type=1327 audit(1747270716.660:367): proctitle=737368643A20636F7265205B707269765D May 15 00:58:36.675880 kernel: audit: type=1105 audit(1747270716.668:368): pid=3811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.668000 audit[3811]: USER_START pid=3811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.680428 kernel: audit: type=1103 audit(1747270716.670:369): pid=3814 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.670000 audit[3814]: CRED_ACQ pid=3814 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.768370 sshd[3811]: pam_unix(sshd:session): session closed for user core May 15 00:58:36.768000 audit[3811]: USER_END pid=3811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.770536 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:39516.service: Deactivated successfully. May 15 00:58:36.771267 systemd[1]: session-16.scope: Deactivated successfully. May 15 00:58:36.771990 systemd-logind[1293]: Session 16 logged out. Waiting for processes to exit. May 15 00:58:36.772608 systemd-logind[1293]: Removed session 16. 
May 15 00:58:36.768000 audit[3811]: CRED_DISP pid=3811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.777815 kernel: audit: type=1106 audit(1747270716.768:370): pid=3811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.777877 kernel: audit: type=1104 audit(1747270716.768:371): pid=3811 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:36.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.134:22-10.0.0.1:39516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:36.985375 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599-rootfs.mount: Deactivated successfully. 
May 15 00:58:37.974099 env[1307]: time="2025-05-15T00:58:37.974051506Z" level=info msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\"" May 15 00:58:37.997832 env[1307]: time="2025-05-15T00:58:37.997767597Z" level=error msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" failed" error="failed to destroy network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:37.998024 kubelet[2211]: E0515 00:58:37.997987 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:58:37.998257 kubelet[2211]: E0515 00:58:37.998041 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087"} May 15 00:58:37.998257 kubelet[2211]: E0515 00:58:37.998077 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04816f63-0644-4a43-8b7e-41868b6f8780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 
00:58:37.998257 kubelet[2211]: E0515 00:58:37.998099 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04816f63-0644-4a43-8b7e-41868b6f8780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lg945" podUID="04816f63-0644-4a43-8b7e-41868b6f8780" May 15 00:58:40.974455 env[1307]: time="2025-05-15T00:58:40.974403247Z" level=info msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\"" May 15 00:58:40.974823 env[1307]: time="2025-05-15T00:58:40.974404068Z" level=info msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\"" May 15 00:58:40.996190 env[1307]: time="2025-05-15T00:58:40.996122697Z" level=error msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" failed" error="failed to destroy network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:40.996431 kubelet[2211]: E0515 00:58:40.996363 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:58:40.996672 kubelet[2211]: E0515 00:58:40.996430 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f"} May 15 00:58:40.996672 kubelet[2211]: E0515 00:58:40.996471 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:40.996672 kubelet[2211]: E0515 00:58:40.996501 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" podUID="c73ac129-52ad-46f3-b7aa-1b4346bf3d86" May 15 00:58:41.001755 env[1307]: time="2025-05-15T00:58:41.001702242Z" level=error msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" failed" error="failed to destroy network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 15 00:58:41.001924 kubelet[2211]: E0515 00:58:41.001876 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:58:41.002007 kubelet[2211]: E0515 00:58:41.001931 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc"} May 15 00:58:41.002007 kubelet[2211]: E0515 00:58:41.001978 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5098666b-a231-44ec-9bf5-415e006ee772\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:41.002112 kubelet[2211]: E0515 00:58:41.002007 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5098666b-a231-44ec-9bf5-415e006ee772\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" 
podUID="5098666b-a231-44ec-9bf5-415e006ee772" May 15 00:58:41.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.134:22-10.0.0.1:39520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:41.772413 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:39520.service. May 15 00:58:41.773496 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:58:41.773549 kernel: audit: type=1130 audit(1747270721.771:373): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.134:22-10.0.0.1:39520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:41.804000 audit[3899]: USER_ACCT pid=3899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.805594 sshd[3899]: Accepted publickey for core from 10.0.0.1 port 39520 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:41.806987 sshd[3899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:41.805000 audit[3899]: CRED_ACQ pid=3899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.810186 systemd-logind[1293]: New session 17 of user core. May 15 00:58:41.811145 systemd[1]: Started session-17.scope. 
May 15 00:58:41.813241 kernel: audit: type=1101 audit(1747270721.804:374): pid=3899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.813295 kernel: audit: type=1103 audit(1747270721.805:375): pid=3899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.813339 kernel: audit: type=1006 audit(1747270721.805:376): pid=3899 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 May 15 00:58:41.805000 audit[3899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2b0ed540 a2=3 a3=0 items=0 ppid=1 pid=3899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:41.819364 kernel: audit: type=1300 audit(1747270721.805:376): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2b0ed540 a2=3 a3=0 items=0 ppid=1 pid=3899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:41.819409 kernel: audit: type=1327 audit(1747270721.805:376): proctitle=737368643A20636F7265205B707269765D May 15 00:58:41.805000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:41.820680 kernel: audit: type=1105 audit(1747270721.814:377): pid=3899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.814000 audit[3899]: USER_START pid=3899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.824841 kernel: audit: type=1103 audit(1747270721.815:378): pid=3902 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.815000 audit[3902]: CRED_ACQ pid=3902 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.908713 sshd[3899]: pam_unix(sshd:session): session closed for user core May 15 00:58:41.908000 audit[3899]: USER_END pid=3899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.911278 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:39520.service: Deactivated successfully. May 15 00:58:41.912369 systemd[1]: session-17.scope: Deactivated successfully. May 15 00:58:41.912808 systemd-logind[1293]: Session 17 logged out. Waiting for processes to exit. May 15 00:58:41.913678 systemd-logind[1293]: Removed session 17. 
May 15 00:58:41.908000 audit[3899]: CRED_DISP pid=3899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.917375 kernel: audit: type=1106 audit(1747270721.908:379): pid=3899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.917431 kernel: audit: type=1104 audit(1747270721.908:380): pid=3899 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:41.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.134:22-10.0.0.1:39520 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:41.974123 env[1307]: time="2025-05-15T00:58:41.974070488Z" level=info msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\"" May 15 00:58:41.974552 env[1307]: time="2025-05-15T00:58:41.974518451Z" level=info msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\"" May 15 00:58:41.997762 env[1307]: time="2025-05-15T00:58:41.997697412Z" level=error msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" failed" error="failed to destroy network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:41.998006 kubelet[2211]: E0515 00:58:41.997941 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:58:41.998316 kubelet[2211]: E0515 00:58:41.998026 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b"} May 15 00:58:41.998316 kubelet[2211]: E0515 00:58:41.998077 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"234fff70-d82a-4012-9e49-d23446deada6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:41.998316 kubelet[2211]: E0515 00:58:41.998108 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"234fff70-d82a-4012-9e49-d23446deada6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:42.000944 env[1307]: time="2025-05-15T00:58:42.000879282Z" level=error msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" failed" error="failed to destroy network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:42.001159 kubelet[2211]: E0515 00:58:42.001037 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:58:42.001159 kubelet[2211]: E0515 00:58:42.001071 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68"} May 15 00:58:42.001159 kubelet[2211]: E0515 00:58:42.001097 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:42.001159 kubelet[2211]: E0515 00:58:42.001120 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" podUID="88d3eb5f-c3af-435c-afdd-38692e59dcc7" May 15 00:58:42.704327 kubelet[2211]: I0515 00:58:42.704275 2211 scope.go:117] "RemoveContainer" containerID="29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599" May 15 00:58:42.704542 kubelet[2211]: E0515 00:58:42.704373 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:42.704838 kubelet[2211]: E0515 00:58:42.704804 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node 
pod=calico-node-dpnsl_calico-system(1af1a5f2-4933-4456-b057-97057326582c)\"" pod="calico-system/calico-node-dpnsl" podUID="1af1a5f2-4933-4456-b057-97057326582c" May 15 00:58:42.974946 env[1307]: time="2025-05-15T00:58:42.974633572Z" level=info msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\"" May 15 00:58:42.996485 env[1307]: time="2025-05-15T00:58:42.996423558Z" level=error msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" failed" error="failed to destroy network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:42.996720 kubelet[2211]: E0515 00:58:42.996666 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:58:42.996792 kubelet[2211]: E0515 00:58:42.996723 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a"} May 15 00:58:42.996792 kubelet[2211]: E0515 00:58:42.996756 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:42.996792 kubelet[2211]: E0515 00:58:42.996783 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2cxpm" podUID="a8b9021c-44c0-4a1b-b21d-74304d9a9ec9" May 15 00:58:46.911904 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:39238.service. May 15 00:58:46.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.134:22-10.0.0.1:39238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:46.916155 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:58:46.916258 kernel: audit: type=1130 audit(1747270726.911:382): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.134:22-10.0.0.1:39238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:46.943000 audit[3984]: USER_ACCT pid=3984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:46.944826 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 39238 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:46.946349 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:46.945000 audit[3984]: CRED_ACQ pid=3984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:46.949797 systemd-logind[1293]: New session 18 of user core. May 15 00:58:46.950594 systemd[1]: Started session-18.scope. May 15 00:58:46.952119 kernel: audit: type=1101 audit(1747270726.943:383): pid=3984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:46.952175 kernel: audit: type=1103 audit(1747270726.945:384): pid=3984 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:46.954434 kernel: audit: type=1006 audit(1747270726.945:385): pid=3984 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 May 15 00:58:46.954478 kernel: audit: type=1300 audit(1747270726.945:385): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd673846e0 a2=3 a3=0 items=0 ppid=1 pid=3984 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:46.945000 audit[3984]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd673846e0 a2=3 a3=0 items=0 ppid=1 pid=3984 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:46.945000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:46.959704 kernel: audit: type=1327 audit(1747270726.945:385): proctitle=737368643A20636F7265205B707269765D May 15 00:58:46.959735 kernel: audit: type=1105 audit(1747270726.953:386): pid=3984 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:46.953000 audit[3984]: USER_START pid=3984 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:46.954000 audit[3987]: CRED_ACQ pid=3987 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:46.967228 kernel: audit: type=1103 audit(1747270726.954:387): pid=3987 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:47.047224 sshd[3984]: pam_unix(sshd:session): session closed for user core May 15 
00:58:47.047000 audit[3984]: USER_END pid=3984 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:47.047000 audit[3984]: CRED_DISP pid=3984 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:47.053142 kubelet[2211]: I0515 00:58:47.048373 2211 scope.go:117] "RemoveContainer" containerID="29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599" May 15 00:58:47.053142 kubelet[2211]: E0515 00:58:47.048441 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:58:47.053142 kubelet[2211]: E0515 00:58:47.048791 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-dpnsl_calico-system(1af1a5f2-4933-4456-b057-97057326582c)\"" pod="calico-system/calico-node-dpnsl" podUID="1af1a5f2-4933-4456-b057-97057326582c" May 15 00:58:47.049229 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:39238.service: Deactivated successfully. May 15 00:58:47.049886 systemd[1]: session-18.scope: Deactivated successfully. May 15 00:58:47.055587 systemd-logind[1293]: Session 18 logged out. Waiting for processes to exit. 
May 15 00:58:47.055857 kernel: audit: type=1106 audit(1747270727.047:388): pid=3984 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:47.056005 kernel: audit: type=1104 audit(1747270727.047:389): pid=3984 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:47.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.134:22-10.0.0.1:39238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:47.056500 systemd-logind[1293]: Removed session 18. May 15 00:58:48.974826 env[1307]: time="2025-05-15T00:58:48.974746691Z" level=info msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\"" May 15 00:58:48.997404 env[1307]: time="2025-05-15T00:58:48.997346769Z" level=error msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" failed" error="failed to destroy network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:48.997635 kubelet[2211]: E0515 00:58:48.997576 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:58:48.997881 kubelet[2211]: E0515 00:58:48.997641 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087"} May 15 00:58:48.997881 kubelet[2211]: E0515 00:58:48.997674 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04816f63-0644-4a43-8b7e-41868b6f8780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:48.997881 kubelet[2211]: E0515 00:58:48.997697 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04816f63-0644-4a43-8b7e-41868b6f8780\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lg945" podUID="04816f63-0644-4a43-8b7e-41868b6f8780" May 15 00:58:52.049870 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:39252.service. May 15 00:58:52.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.134:22-10.0.0.1:39252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:52.050995 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:58:52.051107 kernel: audit: type=1130 audit(1747270732.049:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.134:22-10.0.0.1:39252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:52.087000 audit[4022]: USER_ACCT pid=4022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.088329 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 39252 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:52.090157 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:52.088000 audit[4022]: CRED_ACQ pid=4022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.093688 systemd-logind[1293]: New session 19 of user core. May 15 00:58:52.094498 systemd[1]: Started session-19.scope. 
May 15 00:58:52.097013 kernel: audit: type=1101 audit(1747270732.087:392): pid=4022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.097071 kernel: audit: type=1103 audit(1747270732.088:393): pid=4022 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.097098 kernel: audit: type=1006 audit(1747270732.088:394): pid=4022 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 May 15 00:58:52.099972 kernel: audit: type=1300 audit(1747270732.088:394): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcab4466b0 a2=3 a3=0 items=0 ppid=1 pid=4022 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:52.088000 audit[4022]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcab4466b0 a2=3 a3=0 items=0 ppid=1 pid=4022 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:52.088000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:52.105519 kernel: audit: type=1327 audit(1747270732.088:394): proctitle=737368643A20636F7265205B707269765D May 15 00:58:52.105574 kernel: audit: type=1105 audit(1747270732.098:395): pid=4022 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.098000 audit[4022]: USER_START pid=4022 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.109752 kernel: audit: type=1103 audit(1747270732.100:396): pid=4025 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.100000 audit[4025]: CRED_ACQ pid=4025 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.192033 sshd[4022]: pam_unix(sshd:session): session closed for user core May 15 00:58:52.191000 audit[4022]: USER_END pid=4022 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.193909 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:39252.service: Deactivated successfully. May 15 00:58:52.195046 systemd[1]: session-19.scope: Deactivated successfully. May 15 00:58:52.195097 systemd-logind[1293]: Session 19 logged out. Waiting for processes to exit. May 15 00:58:52.196027 systemd-logind[1293]: Removed session 19. 
May 15 00:58:52.191000 audit[4022]: CRED_DISP pid=4022 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.200639 kernel: audit: type=1106 audit(1747270732.191:397): pid=4022 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.200704 kernel: audit: type=1104 audit(1747270732.191:398): pid=4022 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:52.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.134:22-10.0.0.1:39252 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:58:53.974533 env[1307]: time="2025-05-15T00:58:53.974489845Z" level=info msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\"" May 15 00:58:53.974892 env[1307]: time="2025-05-15T00:58:53.974544563Z" level=info msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\"" May 15 00:58:53.974892 env[1307]: time="2025-05-15T00:58:53.974799976Z" level=info msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\"" May 15 00:58:53.974892 env[1307]: time="2025-05-15T00:58:53.974496606Z" level=info msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\"" May 15 00:58:54.005697 env[1307]: time="2025-05-15T00:58:54.005618583Z" level=error msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" failed" error="failed to destroy network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:54.005977 kubelet[2211]: E0515 00:58:54.005911 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:58:54.006253 kubelet[2211]: E0515 00:58:54.006002 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc"} May 15 00:58:54.006253 
kubelet[2211]: E0515 00:58:54.006065 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5098666b-a231-44ec-9bf5-415e006ee772\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:54.006253 kubelet[2211]: E0515 00:58:54.006097 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5098666b-a231-44ec-9bf5-415e006ee772\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" podUID="5098666b-a231-44ec-9bf5-415e006ee772" May 15 00:58:54.010532 env[1307]: time="2025-05-15T00:58:54.010474971Z" level=error msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" failed" error="failed to destroy network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:54.010789 kubelet[2211]: E0515 00:58:54.010755 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:58:54.010849 kubelet[2211]: E0515 00:58:54.010796 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68"} May 15 00:58:54.010849 kubelet[2211]: E0515 00:58:54.010820 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:54.010849 kubelet[2211]: E0515 00:58:54.010838 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88d3eb5f-c3af-435c-afdd-38692e59dcc7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" podUID="88d3eb5f-c3af-435c-afdd-38692e59dcc7" May 15 00:58:54.011215 env[1307]: time="2025-05-15T00:58:54.011185619Z" level=error msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" failed" error="failed to destroy network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:54.011349 kubelet[2211]: E0515 00:58:54.011327 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:58:54.011427 kubelet[2211]: E0515 00:58:54.011352 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f"} May 15 00:58:54.011427 kubelet[2211]: E0515 00:58:54.011378 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:54.011427 kubelet[2211]: E0515 00:58:54.011395 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c73ac129-52ad-46f3-b7aa-1b4346bf3d86\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" podUID="c73ac129-52ad-46f3-b7aa-1b4346bf3d86" May 15 00:58:54.013678 env[1307]: time="2025-05-15T00:58:54.013626536Z" level=error msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" failed" error="failed to destroy network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:54.013766 kubelet[2211]: E0515 00:58:54.013729 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:58:54.013822 kubelet[2211]: E0515 00:58:54.013759 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b"} May 15 00:58:54.013822 kubelet[2211]: E0515 00:58:54.013788 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"234fff70-d82a-4012-9e49-d23446deada6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:54.013822 kubelet[2211]: E0515 
00:58:54.013804 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"234fff70-d82a-4012-9e49-d23446deada6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pk5fw" podUID="234fff70-d82a-4012-9e49-d23446deada6" May 15 00:58:55.974682 env[1307]: time="2025-05-15T00:58:55.974603691Z" level=info msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\"" May 15 00:58:55.994362 env[1307]: time="2025-05-15T00:58:55.994293304Z" level=error msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" failed" error="failed to destroy network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 15 00:58:55.994566 kubelet[2211]: E0515 00:58:55.994526 2211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:58:55.994839 kubelet[2211]: E0515 00:58:55.994578 2211 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a"} May 15 00:58:55.994839 kubelet[2211]: E0515 00:58:55.994609 2211 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 15 00:58:55.994839 kubelet[2211]: E0515 00:58:55.994631 2211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-2cxpm" podUID="a8b9021c-44c0-4a1b-b21d-74304d9a9ec9" May 15 00:58:57.194554 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:36300.service. May 15 00:58:57.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.134:22-10.0.0.1:36300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:58:57.195946 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:58:57.196022 kernel: audit: type=1130 audit(1747270737.193:400): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.134:22-10.0.0.1:36300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 15 00:58:57.231000 audit[4154]: USER_ACCT pid=4154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.232948 sshd[4154]: Accepted publickey for core from 10.0.0.1 port 36300 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:58:57.234528 sshd[4154]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:58:57.233000 audit[4154]: CRED_ACQ pid=4154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.238003 systemd-logind[1293]: New session 20 of user core. May 15 00:58:57.238869 systemd[1]: Started session-20.scope. May 15 00:58:57.241971 kernel: audit: type=1101 audit(1747270737.231:401): pid=4154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.242028 kernel: audit: type=1103 audit(1747270737.233:402): pid=4154 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.242063 kernel: audit: type=1006 audit(1747270737.233:403): pid=4154 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 May 15 00:58:57.245050 kernel: audit: type=1300 audit(1747270737.233:403): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc2aac7d0 a2=3 a3=0 items=0 ppid=1 pid=4154 auid=500 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:57.233000 audit[4154]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc2aac7d0 a2=3 a3=0 items=0 ppid=1 pid=4154 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:58:57.233000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:58:57.250124 kernel: audit: type=1327 audit(1747270737.233:403): proctitle=737368643A20636F7265205B707269765D May 15 00:58:57.250201 kernel: audit: type=1105 audit(1747270737.242:404): pid=4154 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.242000 audit[4154]: USER_START pid=4154 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.243000 audit[4157]: CRED_ACQ pid=4157 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.257933 kernel: audit: type=1103 audit(1747270737.243:405): pid=4157 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.339075 sshd[4154]: pam_unix(sshd:session): session closed for user core May 15 
00:58:57.339000 audit[4154]: USER_END pid=4154 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.341530 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:36300.service: Deactivated successfully. May 15 00:58:57.342244 systemd[1]: session-20.scope: Deactivated successfully. May 15 00:58:57.343051 systemd-logind[1293]: Session 20 logged out. Waiting for processes to exit. May 15 00:58:57.343670 systemd-logind[1293]: Removed session 20. May 15 00:58:57.339000 audit[4154]: CRED_DISP pid=4154 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.347781 kernel: audit: type=1106 audit(1747270737.339:406): pid=4154 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.347857 kernel: audit: type=1104 audit(1747270737.339:407): pid=4154 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:58:57.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.134:22-10.0.0.1:36300 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:01.433588 env[1307]: time="2025-05-15T00:59:01.433523903Z" level=info msg="StopPodSandbox for \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\"" May 15 00:59:01.433588 env[1307]: time="2025-05-15T00:59:01.433591136Z" level=info msg="Container to stop \"662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:59:01.434004 env[1307]: time="2025-05-15T00:59:01.433604852Z" level=info msg="Container to stop \"4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:59:01.434004 env[1307]: time="2025-05-15T00:59:01.433616254Z" level=info msg="Container to stop \"29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 00:59:01.436091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66-shm.mount: Deactivated successfully. May 15 00:59:01.466056 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66-rootfs.mount: Deactivated successfully. 
May 15 00:59:01.478214 env[1307]: time="2025-05-15T00:59:01.478173518Z" level=info msg="shim disconnected" id=6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66 May 15 00:59:01.478214 env[1307]: time="2025-05-15T00:59:01.478213031Z" level=warning msg="cleaning up after shim disconnected" id=6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66 namespace=k8s.io May 15 00:59:01.478410 env[1307]: time="2025-05-15T00:59:01.478221617Z" level=info msg="cleaning up dead shim" May 15 00:59:01.485454 env[1307]: time="2025-05-15T00:59:01.485409364Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:59:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4189 runtime=io.containerd.runc.v2\ntime=\"2025-05-15T00:59:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" May 15 00:59:01.485895 env[1307]: time="2025-05-15T00:59:01.485856058Z" level=info msg="TearDown network for sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" successfully" May 15 00:59:01.485895 env[1307]: time="2025-05-15T00:59:01.485888648Z" level=info msg="StopPodSandbox for \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" returns successfully" May 15 00:59:01.519975 kubelet[2211]: I0515 00:59:01.518186 2211 topology_manager.go:215] "Topology Admit Handler" podUID="f05aecfe-a999-41e7-a471-e92205ab2946" podNamespace="calico-system" podName="calico-node-dmpwv" May 15 00:59:01.519975 kubelet[2211]: E0515 00:59:01.518251 2211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1af1a5f2-4933-4456-b057-97057326582c" containerName="calico-node" May 15 00:59:01.519975 kubelet[2211]: E0515 00:59:01.518259 2211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1af1a5f2-4933-4456-b057-97057326582c" containerName="flexvol-driver" May 15 00:59:01.519975 kubelet[2211]: E0515 00:59:01.518264 2211 
cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1af1a5f2-4933-4456-b057-97057326582c" containerName="install-cni" May 15 00:59:01.519975 kubelet[2211]: E0515 00:59:01.518269 2211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1af1a5f2-4933-4456-b057-97057326582c" containerName="calico-node" May 15 00:59:01.519975 kubelet[2211]: I0515 00:59:01.518294 2211 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af1a5f2-4933-4456-b057-97057326582c" containerName="calico-node" May 15 00:59:01.519975 kubelet[2211]: I0515 00:59:01.518302 2211 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af1a5f2-4933-4456-b057-97057326582c" containerName="calico-node" May 15 00:59:01.519975 kubelet[2211]: I0515 00:59:01.518307 2211 memory_manager.go:354] "RemoveStaleState removing state" podUID="1af1a5f2-4933-4456-b057-97057326582c" containerName="calico-node" May 15 00:59:01.519975 kubelet[2211]: E0515 00:59:01.518325 2211 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1af1a5f2-4933-4456-b057-97057326582c" containerName="calico-node" May 15 00:59:01.655240 kubelet[2211]: I0515 00:59:01.655192 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1af1a5f2-4933-4456-b057-97057326582c-node-certs\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655240 kubelet[2211]: I0515 00:59:01.655222 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-lib-modules\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655422 kubelet[2211]: I0515 00:59:01.655256 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-xtables-lock\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655422 kubelet[2211]: I0515 00:59:01.655271 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-policysync\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655422 kubelet[2211]: I0515 00:59:01.655292 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzxhc\" (UniqueName: \"kubernetes.io/projected/1af1a5f2-4933-4456-b057-97057326582c-kube-api-access-vzxhc\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655422 kubelet[2211]: I0515 00:59:01.655308 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1af1a5f2-4933-4456-b057-97057326582c-tigera-ca-bundle\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655422 kubelet[2211]: I0515 00:59:01.655321 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-var-run-calico\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655422 kubelet[2211]: I0515 00:59:01.655333 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-var-lib-calico\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655566 kubelet[2211]: I0515 00:59:01.655347 
2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-log-dir\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655566 kubelet[2211]: I0515 00:59:01.655361 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-bin-dir\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655566 kubelet[2211]: I0515 00:59:01.655373 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-flexvol-driver-host\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655566 kubelet[2211]: I0515 00:59:01.655385 2211 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-net-dir\") pod \"1af1a5f2-4933-4456-b057-97057326582c\" (UID: \"1af1a5f2-4933-4456-b057-97057326582c\") " May 15 00:59:01.655566 kubelet[2211]: I0515 00:59:01.655438 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f05aecfe-a999-41e7-a471-e92205ab2946-xtables-lock\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655566 kubelet[2211]: I0515 00:59:01.655458 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk4sd\" (UniqueName: 
\"kubernetes.io/projected/f05aecfe-a999-41e7-a471-e92205ab2946-kube-api-access-hk4sd\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655707 kubelet[2211]: I0515 00:59:01.655473 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f05aecfe-a999-41e7-a471-e92205ab2946-var-lib-calico\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655707 kubelet[2211]: I0515 00:59:01.655489 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f05aecfe-a999-41e7-a471-e92205ab2946-lib-modules\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655707 kubelet[2211]: I0515 00:59:01.655503 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f05aecfe-a999-41e7-a471-e92205ab2946-var-run-calico\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655707 kubelet[2211]: I0515 00:59:01.655517 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f05aecfe-a999-41e7-a471-e92205ab2946-tigera-ca-bundle\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655707 kubelet[2211]: I0515 00:59:01.655532 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/f05aecfe-a999-41e7-a471-e92205ab2946-flexvol-driver-host\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655829 kubelet[2211]: I0515 00:59:01.655521 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:59:01.655829 kubelet[2211]: I0515 00:59:01.655546 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f05aecfe-a999-41e7-a471-e92205ab2946-cni-log-dir\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655829 kubelet[2211]: I0515 00:59:01.655561 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f05aecfe-a999-41e7-a471-e92205ab2946-policysync\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655829 kubelet[2211]: I0515 00:59:01.655573 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:59:01.655829 kubelet[2211]: I0515 00:59:01.655576 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f05aecfe-a999-41e7-a471-e92205ab2946-cni-bin-dir\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655978 kubelet[2211]: I0515 00:59:01.655590 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:59:01.655978 kubelet[2211]: I0515 00:59:01.655592 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f05aecfe-a999-41e7-a471-e92205ab2946-cni-net-dir\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655978 kubelet[2211]: I0515 00:59:01.655610 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-policysync" (OuterVolumeSpecName: "policysync") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "policysync". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:59:01.655978 kubelet[2211]: I0515 00:59:01.655624 2211 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f05aecfe-a999-41e7-a471-e92205ab2946-node-certs\") pod \"calico-node-dmpwv\" (UID: \"f05aecfe-a999-41e7-a471-e92205ab2946\") " pod="calico-system/calico-node-dmpwv" May 15 00:59:01.655978 kubelet[2211]: I0515 00:59:01.655647 2211 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-var-lib-calico\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.655978 kubelet[2211]: I0515 00:59:01.655655 2211 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-policysync\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.656129 kubelet[2211]: I0515 00:59:01.655665 2211 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.656129 kubelet[2211]: I0515 00:59:01.655673 2211 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.656129 kubelet[2211]: I0515 00:59:01.655705 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:59:01.656129 kubelet[2211]: I0515 00:59:01.655725 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:59:01.656129 kubelet[2211]: I0515 00:59:01.655738 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:59:01.656244 kubelet[2211]: I0515 00:59:01.655751 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:59:01.656244 kubelet[2211]: I0515 00:59:01.655973 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "var-run-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 00:59:01.658734 kubelet[2211]: I0515 00:59:01.658704 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1af1a5f2-4933-4456-b057-97057326582c-node-certs" (OuterVolumeSpecName: "node-certs") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 00:59:01.658815 kubelet[2211]: I0515 00:59:01.658788 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1af1a5f2-4933-4456-b057-97057326582c-kube-api-access-vzxhc" (OuterVolumeSpecName: "kube-api-access-vzxhc") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "kube-api-access-vzxhc". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 00:59:01.660660 kubelet[2211]: I0515 00:59:01.660634 2211 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1af1a5f2-4933-4456-b057-97057326582c-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "1af1a5f2-4933-4456-b057-97057326582c" (UID: "1af1a5f2-4933-4456-b057-97057326582c"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 00:59:01.661855 systemd[1]: var-lib-kubelet-pods-1af1a5f2\x2d4933\x2d4456\x2db057\x2d97057326582c-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. May 15 00:59:01.662032 systemd[1]: var-lib-kubelet-pods-1af1a5f2\x2d4933\x2d4456\x2db057\x2d97057326582c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvzxhc.mount: Deactivated successfully. May 15 00:59:01.662115 systemd[1]: var-lib-kubelet-pods-1af1a5f2\x2d4933\x2d4456\x2db057\x2d97057326582c-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
May 15 00:59:01.757831 kubelet[2211]: I0515 00:59:01.756933 2211 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1af1a5f2-4933-4456-b057-97057326582c-node-certs\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.757831 kubelet[2211]: I0515 00:59:01.757003 2211 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-vzxhc\" (UniqueName: \"kubernetes.io/projected/1af1a5f2-4933-4456-b057-97057326582c-kube-api-access-vzxhc\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.757831 kubelet[2211]: I0515 00:59:01.757019 2211 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.757831 kubelet[2211]: I0515 00:59:01.757030 2211 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1af1a5f2-4933-4456-b057-97057326582c-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.757831 kubelet[2211]: I0515 00:59:01.757041 2211 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-var-run-calico\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.757831 kubelet[2211]: I0515 00:59:01.757050 2211 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-log-dir\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.757831 kubelet[2211]: I0515 00:59:01.757059 2211 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.757831 kubelet[2211]: I0515 00:59:01.757070 2211 reconciler_common.go:289] 
"Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1af1a5f2-4933-4456-b057-97057326582c-cni-net-dir\") on node \"localhost\" DevicePath \"\"" May 15 00:59:01.821554 kubelet[2211]: E0515 00:59:01.821512 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:01.822155 env[1307]: time="2025-05-15T00:59:01.822086126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmpwv,Uid:f05aecfe-a999-41e7-a471-e92205ab2946,Namespace:calico-system,Attempt:0,}" May 15 00:59:01.834990 env[1307]: time="2025-05-15T00:59:01.834537442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:59:01.834990 env[1307]: time="2025-05-15T00:59:01.834632677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:59:01.834990 env[1307]: time="2025-05-15T00:59:01.834662402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:59:01.835203 env[1307]: time="2025-05-15T00:59:01.835141777Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8945b2fd029b8bb088baa0f53eb646e646d2b872bf5ab82528de512b169b80e6 pid=4214 runtime=io.containerd.runc.v2 May 15 00:59:01.865034 env[1307]: time="2025-05-15T00:59:01.864981144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dmpwv,Uid:f05aecfe-a999-41e7-a471-e92205ab2946,Namespace:calico-system,Attempt:0,} returns sandbox id \"8945b2fd029b8bb088baa0f53eb646e646d2b872bf5ab82528de512b169b80e6\"" May 15 00:59:01.865604 kubelet[2211]: E0515 00:59:01.865585 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:01.867305 env[1307]: time="2025-05-15T00:59:01.867270547Z" level=info msg="CreateContainer within sandbox \"8945b2fd029b8bb088baa0f53eb646e646d2b872bf5ab82528de512b169b80e6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 15 00:59:01.880271 env[1307]: time="2025-05-15T00:59:01.880229149Z" level=info msg="CreateContainer within sandbox \"8945b2fd029b8bb088baa0f53eb646e646d2b872bf5ab82528de512b169b80e6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"85f00e92960ee05fa13590853594b8dc6ef7a256b782075e98335f5fb2717692\"" May 15 00:59:01.880634 env[1307]: time="2025-05-15T00:59:01.880603008Z" level=info msg="StartContainer for \"85f00e92960ee05fa13590853594b8dc6ef7a256b782075e98335f5fb2717692\"" May 15 00:59:01.928140 env[1307]: time="2025-05-15T00:59:01.928097571Z" level=info msg="StartContainer for \"85f00e92960ee05fa13590853594b8dc6ef7a256b782075e98335f5fb2717692\" returns successfully" May 15 00:59:01.978070 env[1307]: time="2025-05-15T00:59:01.978025141Z" level=info msg="shim disconnected" 
id=85f00e92960ee05fa13590853594b8dc6ef7a256b782075e98335f5fb2717692 May 15 00:59:01.978070 env[1307]: time="2025-05-15T00:59:01.978068010Z" level=warning msg="cleaning up after shim disconnected" id=85f00e92960ee05fa13590853594b8dc6ef7a256b782075e98335f5fb2717692 namespace=k8s.io May 15 00:59:01.978070 env[1307]: time="2025-05-15T00:59:01.978077206Z" level=info msg="cleaning up dead shim" May 15 00:59:01.983669 env[1307]: time="2025-05-15T00:59:01.983615943Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:59:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4299 runtime=io.containerd.runc.v2\n" May 15 00:59:02.147205 kubelet[2211]: E0515 00:59:02.147120 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:02.148883 env[1307]: time="2025-05-15T00:59:02.148853424Z" level=info msg="CreateContainer within sandbox \"8945b2fd029b8bb088baa0f53eb646e646d2b872bf5ab82528de512b169b80e6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 15 00:59:02.151591 kubelet[2211]: I0515 00:59:02.150975 2211 scope.go:117] "RemoveContainer" containerID="29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599" May 15 00:59:02.152601 env[1307]: time="2025-05-15T00:59:02.152569303Z" level=info msg="RemoveContainer for \"29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599\"" May 15 00:59:02.157432 env[1307]: time="2025-05-15T00:59:02.157395924Z" level=info msg="RemoveContainer for \"29d6f39efcbbf9b7302319ca7780becf2991a294e895beedcf540cee5cb9a599\" returns successfully" May 15 00:59:02.157574 kubelet[2211]: I0515 00:59:02.157540 2211 scope.go:117] "RemoveContainer" containerID="4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a" May 15 00:59:02.158523 env[1307]: time="2025-05-15T00:59:02.158498031Z" level=info msg="RemoveContainer for 
\"4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a\"" May 15 00:59:02.168032 env[1307]: time="2025-05-15T00:59:02.167048665Z" level=info msg="CreateContainer within sandbox \"8945b2fd029b8bb088baa0f53eb646e646d2b872bf5ab82528de512b169b80e6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8b1cdbf0dfd6a1fb517944de9a54c5db328246c44299cbae2efa2eadd53e213b\"" May 15 00:59:02.168032 env[1307]: time="2025-05-15T00:59:02.167549731Z" level=info msg="StartContainer for \"8b1cdbf0dfd6a1fb517944de9a54c5db328246c44299cbae2efa2eadd53e213b\"" May 15 00:59:02.168371 env[1307]: time="2025-05-15T00:59:02.168333179Z" level=info msg="RemoveContainer for \"4e2e68724be8f6f5c866585bf8a8883f314048526bc440324061c923598c509a\" returns successfully" May 15 00:59:02.169164 kubelet[2211]: I0515 00:59:02.169118 2211 scope.go:117] "RemoveContainer" containerID="662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4" May 15 00:59:02.170233 env[1307]: time="2025-05-15T00:59:02.170208176Z" level=info msg="RemoveContainer for \"662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4\"" May 15 00:59:02.174682 env[1307]: time="2025-05-15T00:59:02.174307111Z" level=info msg="RemoveContainer for \"662f2a1d7576cdd1d0b783d6735eeb245357f1e8fdcf71df3157505a089e67e4\" returns successfully" May 15 00:59:02.219059 env[1307]: time="2025-05-15T00:59:02.218999298Z" level=info msg="StartContainer for \"8b1cdbf0dfd6a1fb517944de9a54c5db328246c44299cbae2efa2eadd53e213b\" returns successfully" May 15 00:59:02.342291 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:36316.service. May 15 00:59:02.347611 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:59:02.347667 kernel: audit: type=1130 audit(1747270742.341:409): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.134:22-10.0.0.1:36316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:02.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.134:22-10.0.0.1:36316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:02.379375 sshd[4352]: Accepted publickey for core from 10.0.0.1 port 36316 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:02.378000 audit[4352]: USER_ACCT pid=4352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.379000 audit[4352]: CRED_ACQ pid=4352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.383655 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:02.387346 kernel: audit: type=1101 audit(1747270742.378:410): pid=4352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.387392 kernel: audit: type=1103 audit(1747270742.379:411): pid=4352 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.389971 kernel: audit: type=1006 audit(1747270742.379:412): pid=4352 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 May 15 00:59:02.387905 systemd[1]: Started session-21.scope. 
May 15 00:59:02.388310 systemd-logind[1293]: New session 21 of user core. May 15 00:59:02.379000 audit[4352]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6ab84310 a2=3 a3=0 items=0 ppid=1 pid=4352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:02.397084 kernel: audit: type=1300 audit(1747270742.379:412): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6ab84310 a2=3 a3=0 items=0 ppid=1 pid=4352 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:02.379000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:02.391000 audit[4352]: USER_START pid=4352 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.403174 kernel: audit: type=1327 audit(1747270742.379:412): proctitle=737368643A20636F7265205B707269765D May 15 00:59:02.403223 kernel: audit: type=1105 audit(1747270742.391:413): pid=4352 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.403242 kernel: audit: type=1103 audit(1747270742.392:414): pid=4355 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.392000 audit[4355]: CRED_ACQ pid=4355 uid=0 auid=500 ses=21 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.506242 sshd[4352]: pam_unix(sshd:session): session closed for user core May 15 00:59:02.507000 audit[4352]: USER_END pid=4352 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.511396 systemd-logind[1293]: Session 21 logged out. Waiting for processes to exit. May 15 00:59:02.511504 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:36316.service: Deactivated successfully. May 15 00:59:02.512168 systemd[1]: session-21.scope: Deactivated successfully. May 15 00:59:02.512617 systemd-logind[1293]: Removed session 21. May 15 00:59:02.517224 kernel: audit: type=1106 audit(1747270742.507:415): pid=4352 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.517363 kernel: audit: type=1104 audit(1747270742.508:416): pid=4352 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.508000 audit[4352]: CRED_DISP pid=4352 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:02.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@20-10.0.0.134:22-10.0.0.1:36316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:02.648025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b1cdbf0dfd6a1fb517944de9a54c5db328246c44299cbae2efa2eadd53e213b-rootfs.mount: Deactivated successfully. May 15 00:59:02.746633 env[1307]: time="2025-05-15T00:59:02.746521279Z" level=info msg="shim disconnected" id=8b1cdbf0dfd6a1fb517944de9a54c5db328246c44299cbae2efa2eadd53e213b May 15 00:59:02.746633 env[1307]: time="2025-05-15T00:59:02.746571381Z" level=warning msg="cleaning up after shim disconnected" id=8b1cdbf0dfd6a1fb517944de9a54c5db328246c44299cbae2efa2eadd53e213b namespace=k8s.io May 15 00:59:02.746633 env[1307]: time="2025-05-15T00:59:02.746579727Z" level=info msg="cleaning up dead shim" May 15 00:59:02.752755 env[1307]: time="2025-05-15T00:59:02.752721168Z" level=warning msg="cleanup warnings time=\"2025-05-15T00:59:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4380 runtime=io.containerd.runc.v2\n" May 15 00:59:03.155117 kubelet[2211]: E0515 00:59:03.155085 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:03.165123 env[1307]: time="2025-05-15T00:59:03.165077098Z" level=info msg="CreateContainer within sandbox \"8945b2fd029b8bb088baa0f53eb646e646d2b872bf5ab82528de512b169b80e6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 15 00:59:03.256769 env[1307]: time="2025-05-15T00:59:03.256715374Z" level=info msg="CreateContainer within sandbox \"8945b2fd029b8bb088baa0f53eb646e646d2b872bf5ab82528de512b169b80e6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cb4fbb827787720ce10d4fff7f0a1c9ada9d5aa257c26f145077154d2d07b693\"" May 15 00:59:03.257324 env[1307]: time="2025-05-15T00:59:03.257191156Z" level=info msg="StartContainer for 
\"cb4fbb827787720ce10d4fff7f0a1c9ada9d5aa257c26f145077154d2d07b693\"" May 15 00:59:03.302147 env[1307]: time="2025-05-15T00:59:03.302107867Z" level=info msg="StartContainer for \"cb4fbb827787720ce10d4fff7f0a1c9ada9d5aa257c26f145077154d2d07b693\" returns successfully" May 15 00:59:03.974013 env[1307]: time="2025-05-15T00:59:03.973968127Z" level=info msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\"" May 15 00:59:03.980803 kubelet[2211]: I0515 00:59:03.980756 2211 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1af1a5f2-4933-4456-b057-97057326582c" path="/var/lib/kubelet/pods/1af1a5f2-4933-4456-b057-97057326582c/volumes" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.015 [INFO][4467] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.015 [INFO][4467] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" iface="eth0" netns="/var/run/netns/cni-b2add557-c923-6fe6-712d-1723bd645d66" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.016 [INFO][4467] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" iface="eth0" netns="/var/run/netns/cni-b2add557-c923-6fe6-712d-1723bd645d66" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.016 [INFO][4467] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" iface="eth0" netns="/var/run/netns/cni-b2add557-c923-6fe6-712d-1723bd645d66" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.016 [INFO][4467] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.016 [INFO][4467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.033 [INFO][4475] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" HandleID="k8s-pod-network.aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.033 [INFO][4475] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.033 [INFO][4475] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.037 [WARNING][4475] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" HandleID="k8s-pod-network.aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.037 [INFO][4475] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" HandleID="k8s-pod-network.aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.040 [INFO][4475] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:04.043114 env[1307]: 2025-05-15 00:59:04.041 [INFO][4467] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:04.043610 env[1307]: time="2025-05-15T00:59:04.043258917Z" level=info msg="TearDown network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" successfully" May 15 00:59:04.043610 env[1307]: time="2025-05-15T00:59:04.043289113Z" level=info msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" returns successfully" May 15 00:59:04.043663 kubelet[2211]: E0515 00:59:04.043609 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:04.044502 env[1307]: time="2025-05-15T00:59:04.044472619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lg945,Uid:04816f63-0644-4a43-8b7e-41868b6f8780,Namespace:kube-system,Attempt:1,}" May 15 00:59:04.045477 systemd[1]: run-netns-cni\x2db2add557\x2dc923\x2d6fe6\x2d712d\x2d1723bd645d66.mount: Deactivated successfully. 
May 15 00:59:04.159530 kubelet[2211]: E0515 00:59:04.159495 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:04.205360 systemd-networkd[1089]: cali8b04d6857ed: Link UP May 15 00:59:04.208263 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 00:59:04.208313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8b04d6857ed: link becomes ready May 15 00:59:04.208153 systemd-networkd[1089]: cali8b04d6857ed: Gained carrier May 15 00:59:04.218194 kubelet[2211]: I0515 00:59:04.217926 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dmpwv" podStartSLOduration=3.2179098 podStartE2EDuration="3.2179098s" podCreationTimestamp="2025-05-15 00:59:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:59:04.175645727 +0000 UTC m=+86.298484798" watchObservedRunningTime="2025-05-15 00:59:04.2179098 +0000 UTC m=+86.340748871" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.124 [INFO][4484] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.131 [INFO][4484] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--lg945-eth0 coredns-7db6d8ff4d- kube-system 04816f63-0644-4a43-8b7e-41868b6f8780 1099 0 2025-05-15 00:57:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-lg945 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8b04d6857ed [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lg945" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lg945-" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.131 [INFO][4484] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lg945" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.160 [INFO][4499] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" HandleID="k8s-pod-network.c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.169 [INFO][4499] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" HandleID="k8s-pod-network.c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030fc60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-lg945", "timestamp":"2025-05-15 00:59:04.160885846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.169 [INFO][4499] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.169 [INFO][4499] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.169 [INFO][4499] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.176 [INFO][4499] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" host="localhost" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.182 [INFO][4499] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.185 [INFO][4499] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.187 [INFO][4499] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.188 [INFO][4499] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.188 [INFO][4499] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" host="localhost" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.189 [INFO][4499] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30 May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.192 [INFO][4499] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" host="localhost" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.196 [INFO][4499] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" host="localhost" May 15 
00:59:04.219581 env[1307]: 2025-05-15 00:59:04.196 [INFO][4499] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" host="localhost" May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.196 [INFO][4499] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:04.219581 env[1307]: 2025-05-15 00:59:04.196 [INFO][4499] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" HandleID="k8s-pod-network.c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.220134 env[1307]: 2025-05-15 00:59:04.197 [INFO][4484] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lg945" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lg945-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"04816f63-0644-4a43-8b7e-41868b6f8780", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-lg945", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b04d6857ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:04.220134 env[1307]: 2025-05-15 00:59:04.197 [INFO][4484] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lg945" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.220134 env[1307]: 2025-05-15 00:59:04.197 [INFO][4484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b04d6857ed ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lg945" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.220134 env[1307]: 2025-05-15 00:59:04.208 [INFO][4484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lg945" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.220134 env[1307]: 2025-05-15 00:59:04.208 [INFO][4484] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lg945" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lg945-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"04816f63-0644-4a43-8b7e-41868b6f8780", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30", Pod:"coredns-7db6d8ff4d-lg945", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b04d6857ed", MAC:"2e:c8:39:6b:2a:23", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:04.220134 env[1307]: 2025-05-15 00:59:04.217 [INFO][4484] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lg945" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:04.235068 env[1307]: time="2025-05-15T00:59:04.233139292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:59:04.235068 env[1307]: time="2025-05-15T00:59:04.233179236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:59:04.235068 env[1307]: time="2025-05-15T00:59:04.233191388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:59:04.235068 env[1307]: time="2025-05-15T00:59:04.233557107Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30 pid=4550 runtime=io.containerd.runc.v2 May 15 00:59:04.253452 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:59:04.273034 env[1307]: time="2025-05-15T00:59:04.272999424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lg945,Uid:04816f63-0644-4a43-8b7e-41868b6f8780,Namespace:kube-system,Attempt:1,} returns sandbox id \"c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30\"" May 15 00:59:04.274403 kubelet[2211]: E0515 00:59:04.273862 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:04.276161 
env[1307]: time="2025-05-15T00:59:04.276126196Z" level=info msg="CreateContainer within sandbox \"c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:59:04.288850 env[1307]: time="2025-05-15T00:59:04.288816206Z" level=info msg="CreateContainer within sandbox \"c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4ec38549101f7e31a8d6e4e6eaa8931e97d1889ba57baa2f21d15df1c341ff3\"" May 15 00:59:04.289917 env[1307]: time="2025-05-15T00:59:04.289339016Z" level=info msg="StartContainer for \"d4ec38549101f7e31a8d6e4e6eaa8931e97d1889ba57baa2f21d15df1c341ff3\"" May 15 00:59:04.326974 env[1307]: time="2025-05-15T00:59:04.326915840Z" level=info msg="StartContainer for \"d4ec38549101f7e31a8d6e4e6eaa8931e97d1889ba57baa2f21d15df1c341ff3\" returns successfully" May 15 00:59:04.522000 audit[4660]: AVC avc: denied { write } for pid=4660 comm="tee" name="fd" dev="proc" ino=27247 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 00:59:04.522000 audit[4660]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc83da0a26 a2=241 a3=1b6 items=1 ppid=4631 pid=4660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.522000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 15 00:59:04.522000 audit: PATH item=0 name="/dev/fd/63" inode=28713 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:59:04.522000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 00:59:04.530000 
audit[4677]: AVC avc: denied { write } for pid=4677 comm="tee" name="fd" dev="proc" ino=28287 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 00:59:04.530000 audit[4677]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffde5beba17 a2=241 a3=1b6 items=1 ppid=4630 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.530000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 15 00:59:04.530000 audit: PATH item=0 name="/dev/fd/63" inode=28721 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:59:04.530000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 00:59:04.539000 audit[4681]: AVC avc: denied { write } for pid=4681 comm="tee" name="fd" dev="proc" ino=28297 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 00:59:04.539000 audit[4681]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe53d79a26 a2=241 a3=1b6 items=1 ppid=4640 pid=4681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.539000 audit: CWD cwd="/etc/service/enabled/confd/log" May 15 00:59:04.539000 audit: PATH item=0 name="/dev/fd/63" inode=27259 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:59:04.539000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 00:59:04.554000 audit[4679]: AVC avc: denied { write } for pid=4679 comm="tee" name="fd" dev="proc" ino=28727 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 00:59:04.554000 audit[4679]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd18459a28 a2=241 a3=1b6 items=1 ppid=4629 pid=4679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.554000 audit: CWD cwd="/etc/service/enabled/cni/log" May 15 00:59:04.554000 audit: PATH item=0 name="/dev/fd/63" inode=25428 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:59:04.554000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 00:59:04.565000 audit[4694]: AVC avc: denied { write } for pid=4694 comm="tee" name="fd" dev="proc" ino=28731 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 00:59:04.565000 audit[4694]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcbe265a16 a2=241 a3=1b6 items=1 ppid=4643 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.565000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 15 00:59:04.565000 audit: PATH item=0 name="/dev/fd/63" inode=28724 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 
nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:59:04.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 00:59:04.565000 audit[4692]: AVC avc: denied { write } for pid=4692 comm="tee" name="fd" dev="proc" ino=28309 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 00:59:04.565000 audit[4692]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff9dc5a27 a2=241 a3=1b6 items=1 ppid=4636 pid=4692 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.565000 audit: CWD cwd="/etc/service/enabled/bird/log" May 15 00:59:04.565000 audit: PATH item=0 name="/dev/fd/63" inode=28293 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:59:04.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 00:59:04.581000 audit[4702]: AVC avc: denied { write } for pid=4702 comm="tee" name="fd" dev="proc" ino=28313 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 15 00:59:04.581000 audit[4702]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe9152ba26 a2=241 a3=1b6 items=1 ppid=4645 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.581000 audit: CWD cwd="/etc/service/enabled/felix/log" May 15 00:59:04.581000 audit: PATH item=0 name="/dev/fd/63" 
inode=25430 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 00:59:04.581000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 15 00:59:04.648000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.648000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.648000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.648000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.648000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.648000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.648000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.648000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.648000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.648000 audit: BPF prog-id=10 op=LOAD May 15 00:59:04.648000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc3f739660 a2=98 a3=3 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.648000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.649000 audit: BPF prog-id=10 op=UNLOAD May 15 00:59:04.650000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit: BPF prog-id=11 op=LOAD May 15 00:59:04.650000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc3f739440 a2=74 a3=540051 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.650000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.650000 audit: BPF prog-id=11 op=UNLOAD May 15 00:59:04.650000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for 
pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.650000 audit: BPF prog-id=12 op=LOAD May 15 00:59:04.650000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc3f739470 a2=94 a3=2 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.650000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.650000 audit: BPF prog-id=12 op=UNLOAD May 15 00:59:04.755000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 
audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 audit: BPF prog-id=13 op=LOAD May 15 00:59:04.755000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc3f739330 a2=40 a3=1 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.755000 audit: BPF prog-id=13 op=UNLOAD May 15 00:59:04.755000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.755000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc3f739400 a2=50 a3=7ffc3f7394e0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.755000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc3f739340 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc3f739370 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc3f739280 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc3f739390 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc3f739370 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc3f739360 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc3f739390 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc3f739370 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc3f739390 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc3f739360 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.762000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.762000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc3f7393d0 a2=28 a3=0 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.762000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 
00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc3f739180 a2=50 a3=1 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.763000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for 
pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit: BPF prog-id=14 op=LOAD May 15 00:59:04.763000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc3f739180 a2=94 a3=5 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.763000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.763000 audit: BPF prog-id=14 op=UNLOAD May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc3f739230 a2=50 a3=1 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.763000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc3f739350 a2=4 a3=38 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.763000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { confidentiality } for pid=4746 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 00:59:04.763000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc3f7393a0 a2=94 a3=6 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.763000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { confidentiality } for pid=4746 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 00:59:04.763000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc3f738b50 a2=94 a3=83 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 
00:59:04.763000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { perfmon } for pid=4746 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { bpf } for pid=4746 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.763000 audit[4746]: AVC avc: denied { confidentiality } for pid=4746 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 00:59:04.763000 audit[4746]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc3f738b50 a2=94 a3=83 items=0 ppid=4647 pid=4746 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.763000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit: BPF prog-id=15 op=LOAD May 15 00:59:04.770000 audit[4751]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb8984190 a2=98 a3=1999999999999999 items=0 ppid=4647 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.770000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 00:59:04.770000 audit: BPF prog-id=15 op=UNLOAD May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit: BPF prog-id=16 op=LOAD May 15 00:59:04.770000 audit[4751]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb8984070 a2=74 a3=ffff items=0 ppid=4647 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.770000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 00:59:04.770000 audit: BPF prog-id=16 op=UNLOAD May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { perfmon } for pid=4751 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit[4751]: AVC avc: denied { bpf } for pid=4751 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.770000 audit: BPF prog-id=17 op=LOAD May 15 00:59:04.770000 audit[4751]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffdb89840b0 a2=40 a3=7ffdb8984290 items=0 ppid=4647 pid=4751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.770000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 15 00:59:04.770000 audit: BPF prog-id=17 op=UNLOAD May 15 00:59:04.807993 systemd-networkd[1089]: vxlan.calico: Link UP May 15 00:59:04.808002 systemd-networkd[1089]: vxlan.calico: Gained carrier May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit: BPF prog-id=18 op=LOAD May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc9520c1b0 a2=98 a3=ffffffff items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit: BPF prog-id=18 op=UNLOAD May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit: BPF prog-id=19 op=LOAD May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc9520bfc0 a2=74 a3=540051 items=0 ppid=4647 pid=4778 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit: BPF prog-id=19 op=UNLOAD May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit: BPF prog-id=20 op=LOAD May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc9520bff0 a2=94 a3=2 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit: BPF prog-id=20 op=UNLOAD May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc9520bec0 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc9520bef0 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc9520be00 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc9520bf10 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc9520bef0 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc9520bee0 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied 
{ bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc9520bf10 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc9520bef0 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc9520bf10 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc9520bee0 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc9520bf50 a2=28 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit[4778]: 
AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.821000 audit: BPF prog-id=21 op=LOAD May 15 00:59:04.821000 audit[4778]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc9520bdc0 a2=40 a3=0 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.821000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.821000 audit: BPF prog-id=21 op=UNLOAD May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc9520bdb0 a2=50 a3=2800 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.822000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc9520bdb0 a2=50 a3=2800 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.822000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for 
pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit: BPF prog-id=22 op=LOAD May 15 00:59:04.822000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc9520b5d0 a2=94 a3=2 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.822000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.822000 audit: BPF prog-id=22 op=UNLOAD May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { perfmon } for pid=4778 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit[4778]: AVC avc: denied { bpf } for pid=4778 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.822000 audit: BPF prog-id=23 op=LOAD May 15 00:59:04.822000 audit[4778]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc9520b6d0 a2=94 a3=30 items=0 ppid=4647 pid=4778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.822000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit: BPF prog-id=24 op=LOAD May 15 00:59:04.829000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffffd04bb70 a2=98 a3=0 items=0 ppid=4647 pid=4787 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.829000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.829000 audit: BPF prog-id=24 op=UNLOAD May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 
audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit: BPF prog-id=25 op=LOAD May 15 00:59:04.829000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffffd04b950 a2=74 a3=540051 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.829000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.829000 audit: BPF prog-id=25 op=UNLOAD May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for 
pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.829000 audit: BPF prog-id=26 op=LOAD May 15 00:59:04.829000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffffd04b980 a2=94 a3=2 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.829000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.829000 audit: BPF prog-id=26 op=UNLOAD May 15 00:59:04.933000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit: BPF prog-id=27 op=LOAD May 15 00:59:04.933000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffffd04b840 a2=40 a3=1 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 15 00:59:04.933000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.933000 audit: BPF prog-id=27 op=UNLOAD May 15 00:59:04.933000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.933000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffffd04b910 a2=50 a3=7ffffd04b9f0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.933000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffffd04b850 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffffd04b880 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffffd04b790 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffffd04b8a0 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffffd04b880 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffffd04b870 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffffd04b8a0 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffffd04b880 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffffd04b8a0 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffffd04b870 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.940000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.940000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffffd04b8e0 a2=28 a3=0 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.940000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffffd04b690 a2=50 a3=1 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.941000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit: BPF prog-id=28 op=LOAD May 15 00:59:04.941000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffffd04b690 a2=94 a3=5 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.941000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.941000 audit: BPF prog-id=28 op=UNLOAD May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffffd04b740 a2=50 a3=1 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.941000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffffd04b860 a2=4 a3=38 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.941000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: 
denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { confidentiality } for pid=4787 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 00:59:04.941000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffffd04b8b0 a2=94 a3=6 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.941000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { confidentiality } for pid=4787 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 00:59:04.941000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffffd04b060 a2=94 a3=83 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.941000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { perfmon } for pid=4787 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { confidentiality } for pid=4787 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 15 00:59:04.941000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffffd04b060 a2=94 a3=83 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.941000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.941000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.941000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffffd04caa0 a2=10 a3=208 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.941000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.942000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.942000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffffd04c940 a2=10 a3=3 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.942000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.942000 audit[4787]: AVC avc: denied { bpf } for pid=4787 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.942000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffffd04c8e0 a2=10 a3=3 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.942000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.942000 audit[4787]: AVC avc: denied { bpf } for pid=4787 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 15 00:59:04.942000 audit[4787]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffffd04c8e0 a2=10 a3=7 items=0 ppid=4647 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.942000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 15 00:59:04.949000 audit: BPF prog-id=23 op=UNLOAD May 15 00:59:04.973921 kubelet[2211]: E0515 00:59:04.973873 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:04.974136 kubelet[2211]: E0515 00:59:04.973998 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:04.974500 env[1307]: time="2025-05-15T00:59:04.974455468Z" level=info msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\"" May 15 00:59:04.998000 audit[4832]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 00:59:04.998000 audit[4832]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc1a09e230 a2=0 a3=7ffc1a09e21c items=0 ppid=4647 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:04.998000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 00:59:05.005000 audit[4831]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=4831 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 00:59:05.005000 audit[4831]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffc49151430 a2=0 a3=7ffc4915141c items=0 ppid=4647 pid=4831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:05.005000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 00:59:05.007000 audit[4834]: NETFILTER_CFG table=filter:99 family=2 entries=69 op=nft_register_chain pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 00:59:05.007000 audit[4834]: SYSCALL arch=c000003e syscall=46 success=yes exit=36404 a0=3 a1=7ffcf0527410 a2=0 a3=7ffcf05273fc items=0 ppid=4647 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:05.007000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 00:59:05.009000 audit[4830]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=4830 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 00:59:05.009000 audit[4830]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffebf9bbdb0 a2=0 a3=7ffebf9bbd9c items=0 ppid=4647 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:05.009000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.028 [INFO][4818] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.029 [INFO][4818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" iface="eth0" netns="/var/run/netns/cni-41ec64c6-2fce-6998-5d9b-d2844d24083e" May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.029 [INFO][4818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" iface="eth0" netns="/var/run/netns/cni-41ec64c6-2fce-6998-5d9b-d2844d24083e" May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.029 [INFO][4818] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" iface="eth0" netns="/var/run/netns/cni-41ec64c6-2fce-6998-5d9b-d2844d24083e" May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.029 [INFO][4818] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.029 [INFO][4818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.048 [INFO][4844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" HandleID="k8s-pod-network.0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.048 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.048 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.053 [WARNING][4844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" HandleID="k8s-pod-network.0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.053 [INFO][4844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" HandleID="k8s-pod-network.0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.054 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:05.057746 env[1307]: 2025-05-15 00:59:05.055 [INFO][4818] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:05.058173 env[1307]: time="2025-05-15T00:59:05.057868966Z" level=info msg="TearDown network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" successfully" May 15 00:59:05.058173 env[1307]: time="2025-05-15T00:59:05.057898661Z" level=info msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" returns successfully" May 15 00:59:05.058671 env[1307]: time="2025-05-15T00:59:05.058570110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fbb64cb9-vxtzh,Uid:c73ac129-52ad-46f3-b7aa-1b4346bf3d86,Namespace:calico-apiserver,Attempt:1,}" May 15 00:59:05.060172 systemd[1]: run-netns-cni\x2d41ec64c6\x2d2fce\x2d6998\x2d5d9b\x2dd2844d24083e.mount: Deactivated successfully. 
May 15 00:59:05.159235 systemd-networkd[1089]: cali1260027e89b: Link UP May 15 00:59:05.161079 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1260027e89b: link becomes ready May 15 00:59:05.161158 systemd-networkd[1089]: cali1260027e89b: Gained carrier May 15 00:59:05.166572 kubelet[2211]: E0515 00:59:05.166424 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:05.167371 kubelet[2211]: E0515 00:59:05.167337 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.107 [INFO][4853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0 calico-apiserver-67fbb64cb9- calico-apiserver c73ac129-52ad-46f3-b7aa-1b4346bf3d86 1130 0 2025-05-15 00:58:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67fbb64cb9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67fbb64cb9-vxtzh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1260027e89b [] []}} ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-vxtzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.107 [INFO][4853] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-vxtzh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.131 [INFO][4865] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" HandleID="k8s-pod-network.a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.137 [INFO][4865] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" HandleID="k8s-pod-network.a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005c9c60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67fbb64cb9-vxtzh", "timestamp":"2025-05-15 00:59:05.131134669 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.137 [INFO][4865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.137 [INFO][4865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.137 [INFO][4865] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.138 [INFO][4865] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" host="localhost" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.141 [INFO][4865] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.144 [INFO][4865] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.145 [INFO][4865] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.146 [INFO][4865] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.146 [INFO][4865] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" host="localhost" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.147 [INFO][4865] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357 May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.151 [INFO][4865] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" host="localhost" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.155 [INFO][4865] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" host="localhost" May 15 
00:59:05.175301 env[1307]: 2025-05-15 00:59:05.155 [INFO][4865] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" host="localhost" May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.155 [INFO][4865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:05.175301 env[1307]: 2025-05-15 00:59:05.155 [INFO][4865] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" HandleID="k8s-pod-network.a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.175873 env[1307]: 2025-05-15 00:59:05.157 [INFO][4853] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-vxtzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0", GenerateName:"calico-apiserver-67fbb64cb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c73ac129-52ad-46f3-b7aa-1b4346bf3d86", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fbb64cb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67fbb64cb9-vxtzh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1260027e89b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:05.175873 env[1307]: 2025-05-15 00:59:05.157 [INFO][4853] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-vxtzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.175873 env[1307]: 2025-05-15 00:59:05.157 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1260027e89b ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-vxtzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.175873 env[1307]: 2025-05-15 00:59:05.161 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-vxtzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.175873 env[1307]: 2025-05-15 00:59:05.161 [INFO][4853] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-vxtzh" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0", GenerateName:"calico-apiserver-67fbb64cb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c73ac129-52ad-46f3-b7aa-1b4346bf3d86", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fbb64cb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357", Pod:"calico-apiserver-67fbb64cb9-vxtzh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1260027e89b", MAC:"5e:d9:f9:59:5b:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:05.175873 env[1307]: 2025-05-15 00:59:05.173 [INFO][4853] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-vxtzh" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:05.179316 kubelet[2211]: 
I0515 00:59:05.178700 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lg945" podStartSLOduration=71.178684339 podStartE2EDuration="1m11.178684339s" podCreationTimestamp="2025-05-15 00:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:59:05.178533548 +0000 UTC m=+87.301372619" watchObservedRunningTime="2025-05-15 00:59:05.178684339 +0000 UTC m=+87.301523410" May 15 00:59:05.201000 audit[4898]: NETFILTER_CFG table=filter:101 family=2 entries=16 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:05.201000 audit[4898]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffca1c00f0 a2=0 a3=7fffca1c00dc items=0 ppid=2416 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:05.201000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:05.209302 env[1307]: time="2025-05-15T00:59:05.209240733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:59:05.209302 env[1307]: time="2025-05-15T00:59:05.209280707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:59:05.209492 env[1307]: time="2025-05-15T00:59:05.209430014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:59:05.209755 env[1307]: time="2025-05-15T00:59:05.209705427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357 pid=4913 runtime=io.containerd.runc.v2 May 15 00:59:05.209000 audit[4898]: NETFILTER_CFG table=nat:102 family=2 entries=14 op=nft_register_rule pid=4898 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:05.209000 audit[4898]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fffca1c00f0 a2=0 a3=0 items=0 ppid=2416 pid=4898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:05.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:05.219000 audit[4938]: NETFILTER_CFG table=filter:103 family=2 entries=44 op=nft_register_chain pid=4938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 00:59:05.219000 audit[4938]: SYSCALL arch=c000003e syscall=46 success=yes exit=24680 a0=3 a1=7ffcd8f2c690 a2=0 a3=7ffcd8f2c67c items=0 ppid=4647 pid=4938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:05.219000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 00:59:05.228000 audit[4947]: NETFILTER_CFG table=filter:104 family=2 entries=13 op=nft_register_rule pid=4947 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:05.228000 audit[4947]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=3676 a0=3 a1=7ffd3f31e8c0 a2=0 a3=7ffd3f31e8ac items=0 ppid=2416 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:05.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:05.231855 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:59:05.235000 audit[4947]: NETFILTER_CFG table=nat:105 family=2 entries=35 op=nft_register_chain pid=4947 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:05.235000 audit[4947]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd3f31e8c0 a2=0 a3=7ffd3f31e8ac items=0 ppid=2416 pid=4947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:05.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:05.254720 env[1307]: time="2025-05-15T00:59:05.254659891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fbb64cb9-vxtzh,Uid:c73ac129-52ad-46f3-b7aa-1b4346bf3d86,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357\"" May 15 00:59:05.256325 env[1307]: time="2025-05-15T00:59:05.256161673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 15 00:59:05.437249 systemd[1]: run-containerd-runc-k8s.io-cb4fbb827787720ce10d4fff7f0a1c9ada9d5aa257c26f145077154d2d07b693-runc.pGO8sw.mount: Deactivated successfully. 
May 15 00:59:05.974341 env[1307]: time="2025-05-15T00:59:05.974278214Z" level=info msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\"" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.012 [INFO][4974] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.012 [INFO][4974] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" iface="eth0" netns="/var/run/netns/cni-d367bafc-7bff-40c6-d443-adfbf0f4a8d1" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.012 [INFO][4974] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" iface="eth0" netns="/var/run/netns/cni-d367bafc-7bff-40c6-d443-adfbf0f4a8d1" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.012 [INFO][4974] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" iface="eth0" netns="/var/run/netns/cni-d367bafc-7bff-40c6-d443-adfbf0f4a8d1" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.012 [INFO][4974] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.012 [INFO][4974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.032 [INFO][4982] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" HandleID="k8s-pod-network.dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.032 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.032 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.037 [WARNING][4982] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" HandleID="k8s-pod-network.dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.037 [INFO][4982] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" HandleID="k8s-pod-network.dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.038 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:06.043009 env[1307]: 2025-05-15 00:59:06.040 [INFO][4974] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:06.043621 env[1307]: time="2025-05-15T00:59:06.043142749Z" level=info msg="TearDown network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" successfully" May 15 00:59:06.043621 env[1307]: time="2025-05-15T00:59:06.043173937Z" level=info msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" returns successfully" May 15 00:59:06.043770 env[1307]: time="2025-05-15T00:59:06.043739361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c78b9db48-dl2b8,Uid:5098666b-a231-44ec-9bf5-415e006ee772,Namespace:calico-system,Attempt:1,}" May 15 00:59:06.045630 systemd[1]: run-netns-cni\x2dd367bafc\x2d7bff\x2d40c6\x2dd443\x2dadfbf0f4a8d1.mount: Deactivated successfully. 
May 15 00:59:06.144516 systemd-networkd[1089]: calied4c1f0edea: Link UP May 15 00:59:06.146557 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 00:59:06.146597 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calied4c1f0edea: link becomes ready May 15 00:59:06.146816 systemd-networkd[1089]: calied4c1f0edea: Gained carrier May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.087 [INFO][4990] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0 calico-kube-controllers-c78b9db48- calico-system 5098666b-a231-44ec-9bf5-415e006ee772 1153 0 2025-05-15 00:58:01 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c78b9db48 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c78b9db48-dl2b8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calied4c1f0edea [] []}} ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Namespace="calico-system" Pod="calico-kube-controllers-c78b9db48-dl2b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.087 [INFO][4990] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Namespace="calico-system" Pod="calico-kube-controllers-c78b9db48-dl2b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.113 [INFO][5005] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" 
HandleID="k8s-pod-network.75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.120 [INFO][5005] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" HandleID="k8s-pod-network.75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030bc30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c78b9db48-dl2b8", "timestamp":"2025-05-15 00:59:06.113259047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.120 [INFO][5005] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.120 [INFO][5005] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.120 [INFO][5005] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.122 [INFO][5005] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" host="localhost" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.125 [INFO][5005] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.129 [INFO][5005] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.130 [INFO][5005] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.132 [INFO][5005] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.132 [INFO][5005] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" host="localhost" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.133 [INFO][5005] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.136 [INFO][5005] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" host="localhost" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.140 [INFO][5005] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" host="localhost" May 15 
00:59:06.156347 env[1307]: 2025-05-15 00:59:06.140 [INFO][5005] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" host="localhost" May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.140 [INFO][5005] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:06.156347 env[1307]: 2025-05-15 00:59:06.140 [INFO][5005] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" HandleID="k8s-pod-network.75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.156933 env[1307]: 2025-05-15 00:59:06.142 [INFO][4990] cni-plugin/k8s.go 386: Populated endpoint ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Namespace="calico-system" Pod="calico-kube-controllers-c78b9db48-dl2b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0", GenerateName:"calico-kube-controllers-c78b9db48-", Namespace:"calico-system", SelfLink:"", UID:"5098666b-a231-44ec-9bf5-415e006ee772", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c78b9db48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c78b9db48-dl2b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied4c1f0edea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:06.156933 env[1307]: 2025-05-15 00:59:06.142 [INFO][4990] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Namespace="calico-system" Pod="calico-kube-controllers-c78b9db48-dl2b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.156933 env[1307]: 2025-05-15 00:59:06.142 [INFO][4990] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied4c1f0edea ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Namespace="calico-system" Pod="calico-kube-controllers-c78b9db48-dl2b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.156933 env[1307]: 2025-05-15 00:59:06.147 [INFO][4990] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Namespace="calico-system" Pod="calico-kube-controllers-c78b9db48-dl2b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.156933 env[1307]: 2025-05-15 00:59:06.147 [INFO][4990] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" 
Namespace="calico-system" Pod="calico-kube-controllers-c78b9db48-dl2b8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0", GenerateName:"calico-kube-controllers-c78b9db48-", Namespace:"calico-system", SelfLink:"", UID:"5098666b-a231-44ec-9bf5-415e006ee772", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c78b9db48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d", Pod:"calico-kube-controllers-c78b9db48-dl2b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied4c1f0edea", MAC:"42:98:81:ec:92:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:06.156933 env[1307]: 2025-05-15 00:59:06.155 [INFO][4990] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d" Namespace="calico-system" Pod="calico-kube-controllers-c78b9db48-dl2b8" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:06.162000 audit[5022]: NETFILTER_CFG table=filter:106 family=2 entries=42 op=nft_register_chain pid=5022 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 00:59:06.162000 audit[5022]: SYSCALL arch=c000003e syscall=46 success=yes exit=21524 a0=3 a1=7fff8e9d62c0 a2=0 a3=7fff8e9d62ac items=0 ppid=4647 pid=5022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:06.162000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 00:59:06.168424 kubelet[2211]: E0515 00:59:06.168386 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:06.171093 systemd-networkd[1089]: cali8b04d6857ed: Gained IPv6LL May 15 00:59:06.173096 env[1307]: time="2025-05-15T00:59:06.173033331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:59:06.173096 env[1307]: time="2025-05-15T00:59:06.173070129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:59:06.173096 env[1307]: time="2025-05-15T00:59:06.173080078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:59:06.173305 env[1307]: time="2025-05-15T00:59:06.173246858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d pid=5033 runtime=io.containerd.runc.v2 May 15 00:59:06.192925 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:59:06.216826 env[1307]: time="2025-05-15T00:59:06.216777445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c78b9db48-dl2b8,Uid:5098666b-a231-44ec-9bf5-415e006ee772,Namespace:calico-system,Attempt:1,} returns sandbox id \"75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d\"" May 15 00:59:06.555145 systemd-networkd[1089]: vxlan.calico: Gained IPv6LL May 15 00:59:06.974552 env[1307]: time="2025-05-15T00:59:06.974516781Z" level=info msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\"" May 15 00:59:06.974773 kubelet[2211]: E0515 00:59:06.974734 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:06.975150 env[1307]: time="2025-05-15T00:59:06.974939909Z" level=info msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\"" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.025 [INFO][5110] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.025 [INFO][5110] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" iface="eth0" netns="/var/run/netns/cni-d3fcbce4-1163-47af-64eb-06255bb46d2c" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.025 [INFO][5110] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" iface="eth0" netns="/var/run/netns/cni-d3fcbce4-1163-47af-64eb-06255bb46d2c" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.025 [INFO][5110] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" iface="eth0" netns="/var/run/netns/cni-d3fcbce4-1163-47af-64eb-06255bb46d2c" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.025 [INFO][5110] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.025 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.050 [INFO][5127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" HandleID="k8s-pod-network.232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.050 [INFO][5127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.050 [INFO][5127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.055 [WARNING][5127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" HandleID="k8s-pod-network.232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.056 [INFO][5127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" HandleID="k8s-pod-network.232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.057 [INFO][5127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:07.060515 env[1307]: 2025-05-15 00:59:07.058 [INFO][5110] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:07.063587 env[1307]: time="2025-05-15T00:59:07.061386246Z" level=info msg="TearDown network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" successfully" May 15 00:59:07.063587 env[1307]: time="2025-05-15T00:59:07.061416272Z" level=info msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" returns successfully" May 15 00:59:07.063587 env[1307]: time="2025-05-15T00:59:07.061981717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pk5fw,Uid:234fff70-d82a-4012-9e49-d23446deada6,Namespace:calico-system,Attempt:1,}" May 15 00:59:07.062982 systemd[1]: run-netns-cni\x2dd3fcbce4\x2d1163\x2d47af\x2d64eb\x2d06255bb46d2c.mount: Deactivated successfully. 
May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.023 [INFO][5109] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.023 [INFO][5109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" iface="eth0" netns="/var/run/netns/cni-fd7a8d94-598f-70c6-71a3-a860975788a8" May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.023 [INFO][5109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" iface="eth0" netns="/var/run/netns/cni-fd7a8d94-598f-70c6-71a3-a860975788a8" May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.023 [INFO][5109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" iface="eth0" netns="/var/run/netns/cni-fd7a8d94-598f-70c6-71a3-a860975788a8" May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.023 [INFO][5109] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.023 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.051 [INFO][5125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" HandleID="k8s-pod-network.ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.051 [INFO][5125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.057 [INFO][5125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.066 [WARNING][5125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" HandleID="k8s-pod-network.ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.067 [INFO][5125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" HandleID="k8s-pod-network.ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.068 [INFO][5125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:07.071759 env[1307]: 2025-05-15 00:59:07.070 [INFO][5109] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:07.072454 env[1307]: time="2025-05-15T00:59:07.071873035Z" level=info msg="TearDown network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" successfully" May 15 00:59:07.072454 env[1307]: time="2025-05-15T00:59:07.071898753Z" level=info msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" returns successfully" May 15 00:59:07.073835 systemd[1]: run-netns-cni\x2dfd7a8d94\x2d598f\x2d70c6\x2d71a3\x2da860975788a8.mount: Deactivated successfully. 
May 15 00:59:07.080056 kubelet[2211]: E0515 00:59:07.079917 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:07.083576 env[1307]: time="2025-05-15T00:59:07.083544274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2cxpm,Uid:a8b9021c-44c0-4a1b-b21d-74304d9a9ec9,Namespace:kube-system,Attempt:1,}" May 15 00:59:07.131132 systemd-networkd[1089]: cali1260027e89b: Gained IPv6LL May 15 00:59:07.172865 kubelet[2211]: E0515 00:59:07.172746 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:07.190107 systemd-networkd[1089]: calif3ccb0317de: Link UP May 15 00:59:07.193182 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 00:59:07.193246 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif3ccb0317de: link becomes ready May 15 00:59:07.193872 systemd-networkd[1089]: calif3ccb0317de: Gained carrier May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.112 [INFO][5144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pk5fw-eth0 csi-node-driver- calico-system 234fff70-d82a-4012-9e49-d23446deada6 1166 0 2025-05-15 00:58:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pk5fw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif3ccb0317de [] []}} ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Namespace="calico-system" 
Pod="csi-node-driver-pk5fw" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk5fw-" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.112 [INFO][5144] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Namespace="calico-system" Pod="csi-node-driver-pk5fw" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.155 [INFO][5171] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" HandleID="k8s-pod-network.d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.161 [INFO][5171] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" HandleID="k8s-pod-network.d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000272480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pk5fw", "timestamp":"2025-05-15 00:59:07.155193876 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.162 [INFO][5171] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.162 [INFO][5171] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.162 [INFO][5171] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.163 [INFO][5171] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" host="localhost" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.165 [INFO][5171] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.168 [INFO][5171] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.170 [INFO][5171] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.171 [INFO][5171] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.172 [INFO][5171] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" host="localhost" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.177 [INFO][5171] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.181 [INFO][5171] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" host="localhost" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.186 [INFO][5171] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" host="localhost" May 15 
00:59:07.205398 env[1307]: 2025-05-15 00:59:07.186 [INFO][5171] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" host="localhost" May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.186 [INFO][5171] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:07.205398 env[1307]: 2025-05-15 00:59:07.186 [INFO][5171] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" HandleID="k8s-pod-network.d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.206578 env[1307]: 2025-05-15 00:59:07.188 [INFO][5144] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Namespace="calico-system" Pod="csi-node-driver-pk5fw" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk5fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pk5fw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"234fff70-d82a-4012-9e49-d23446deada6", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pk5fw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3ccb0317de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:07.206578 env[1307]: 2025-05-15 00:59:07.188 [INFO][5144] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Namespace="calico-system" Pod="csi-node-driver-pk5fw" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.206578 env[1307]: 2025-05-15 00:59:07.188 [INFO][5144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3ccb0317de ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Namespace="calico-system" Pod="csi-node-driver-pk5fw" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.206578 env[1307]: 2025-05-15 00:59:07.193 [INFO][5144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Namespace="calico-system" Pod="csi-node-driver-pk5fw" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.206578 env[1307]: 2025-05-15 00:59:07.194 [INFO][5144] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Namespace="calico-system" Pod="csi-node-driver-pk5fw" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk5fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pk5fw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"234fff70-d82a-4012-9e49-d23446deada6", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f", Pod:"csi-node-driver-pk5fw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3ccb0317de", MAC:"9e:76:4b:5b:d2:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:07.206578 env[1307]: 2025-05-15 00:59:07.202 [INFO][5144] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f" Namespace="calico-system" Pod="csi-node-driver-pk5fw" WorkloadEndpoint="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:07.215000 audit[5200]: NETFILTER_CFG table=filter:107 family=2 entries=42 op=nft_register_chain pid=5200 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 00:59:07.215000 audit[5200]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=21016 a0=3 a1=7ffcabbe1690 a2=0 a3=7ffcabbe167c items=0 ppid=4647 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:07.215000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 00:59:07.229912 env[1307]: time="2025-05-15T00:59:07.229781269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:59:07.232170 systemd-networkd[1089]: cali83ab2f3d88c: Link UP May 15 00:59:07.232821 env[1307]: time="2025-05-15T00:59:07.229853084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:59:07.232821 env[1307]: time="2025-05-15T00:59:07.229866318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:59:07.232821 env[1307]: time="2025-05-15T00:59:07.232413572Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f pid=5215 runtime=io.containerd.runc.v2 May 15 00:59:07.233709 systemd-networkd[1089]: cali83ab2f3d88c: Gained carrier May 15 00:59:07.234240 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali83ab2f3d88c: link becomes ready May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.151 [INFO][5161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0 coredns-7db6d8ff4d- kube-system a8b9021c-44c0-4a1b-b21d-74304d9a9ec9 1165 0 2025-05-15 00:57:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-2cxpm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali83ab2f3d88c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2cxpm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2cxpm-" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.151 [INFO][5161] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2cxpm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.193 [INFO][5182] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" 
HandleID="k8s-pod-network.d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.202 [INFO][5182] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" HandleID="k8s-pod-network.d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e0130), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-2cxpm", "timestamp":"2025-05-15 00:59:07.193415779 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.202 [INFO][5182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.202 [INFO][5182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.202 [INFO][5182] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.204 [INFO][5182] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" host="localhost" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.207 [INFO][5182] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.212 [INFO][5182] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.213 [INFO][5182] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.215 [INFO][5182] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.215 [INFO][5182] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" host="localhost" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.217 [INFO][5182] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98 May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.222 [INFO][5182] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" host="localhost" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.228 [INFO][5182] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" host="localhost" May 15 
00:59:07.248091 env[1307]: 2025-05-15 00:59:07.228 [INFO][5182] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" host="localhost" May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.228 [INFO][5182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:07.248091 env[1307]: 2025-05-15 00:59:07.228 [INFO][5182] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" HandleID="k8s-pod-network.d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.248663 env[1307]: 2025-05-15 00:59:07.230 [INFO][5161] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2cxpm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-2cxpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83ab2f3d88c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:07.248663 env[1307]: 2025-05-15 00:59:07.231 [INFO][5161] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2cxpm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.248663 env[1307]: 2025-05-15 00:59:07.231 [INFO][5161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83ab2f3d88c ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2cxpm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.248663 env[1307]: 2025-05-15 00:59:07.233 [INFO][5161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2cxpm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.248663 env[1307]: 2025-05-15 00:59:07.233 [INFO][5161] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2cxpm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98", Pod:"coredns-7db6d8ff4d-2cxpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83ab2f3d88c", MAC:"c2:e8:8e:01:5d:be", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:07.248663 env[1307]: 2025-05-15 00:59:07.242 [INFO][5161] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98" Namespace="kube-system" Pod="coredns-7db6d8ff4d-2cxpm" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:07.257000 audit[5250]: NETFILTER_CFG table=filter:108 family=2 entries=48 op=nft_register_chain pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 00:59:07.257000 audit[5250]: SYSCALL arch=c000003e syscall=46 success=yes exit=23448 a0=3 a1=7fff05c4f460 a2=0 a3=7fff05c4f44c items=0 ppid=4647 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:07.257000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 00:59:07.267450 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:59:07.278142 env[1307]: time="2025-05-15T00:59:07.278089225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pk5fw,Uid:234fff70-d82a-4012-9e49-d23446deada6,Namespace:calico-system,Attempt:1,} returns sandbox id \"d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f\"" May 15 00:59:07.469326 env[1307]: time="2025-05-15T00:59:07.469259227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:59:07.469484 env[1307]: time="2025-05-15T00:59:07.469301366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:59:07.469484 env[1307]: time="2025-05-15T00:59:07.469311024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:59:07.470074 env[1307]: time="2025-05-15T00:59:07.470032681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98 pid=5272 runtime=io.containerd.runc.v2 May 15 00:59:07.497443 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:59:07.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.134:22-10.0.0.1:48006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:07.508626 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:48006.service. May 15 00:59:07.514174 kernel: kauditd_printk_skb: 540 callbacks suppressed May 15 00:59:07.514294 kernel: audit: type=1130 audit(1747270747.507:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.134:22-10.0.0.1:48006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:07.531547 env[1307]: time="2025-05-15T00:59:07.531504270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-2cxpm,Uid:a8b9021c-44c0-4a1b-b21d-74304d9a9ec9,Namespace:kube-system,Attempt:1,} returns sandbox id \"d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98\"" May 15 00:59:07.533919 kubelet[2211]: E0515 00:59:07.533892 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:07.535566 env[1307]: time="2025-05-15T00:59:07.535538378Z" level=info msg="CreateContainer within sandbox \"d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 15 00:59:07.545000 audit[5300]: USER_ACCT pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.546659 sshd[5300]: Accepted publickey for core from 10.0.0.1 port 48006 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:07.549000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.551069 sshd[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:07.557052 kernel: audit: type=1101 audit(1747270747.545:529): pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.557161 kernel: 
audit: type=1103 audit(1747270747.549:530): pid=5300 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.557180 kernel: audit: type=1006 audit(1747270747.549:531): pid=5300 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 May 15 00:59:07.555473 systemd[1]: Started session-22.scope. May 15 00:59:07.556190 systemd-logind[1293]: New session 22 of user core. May 15 00:59:07.561877 kernel: audit: type=1300 audit(1747270747.549:531): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc76e47230 a2=3 a3=0 items=0 ppid=1 pid=5300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:07.549000 audit[5300]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc76e47230 a2=3 a3=0 items=0 ppid=1 pid=5300 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:07.549000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:07.560000 audit[5300]: USER_START pid=5300 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.568051 kernel: audit: type=1327 audit(1747270747.549:531): proctitle=737368643A20636F7265205B707269765D May 15 00:59:07.568096 kernel: audit: type=1105 audit(1747270747.560:532): pid=5300 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.561000 audit[5309]: CRED_ACQ pid=5309 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.571885 kernel: audit: type=1103 audit(1747270747.561:533): pid=5309 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.577283 env[1307]: time="2025-05-15T00:59:07.577233353Z" level=info msg="CreateContainer within sandbox \"d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cda329d8d1dde6fcd82b03ccf844c3001caeba3951b684395dac1ad8cd4ccffc\"" May 15 00:59:07.579028 env[1307]: time="2025-05-15T00:59:07.579004417Z" level=info msg="StartContainer for \"cda329d8d1dde6fcd82b03ccf844c3001caeba3951b684395dac1ad8cd4ccffc\"" May 15 00:59:07.588156 env[1307]: time="2025-05-15T00:59:07.588095071Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:07.590811 env[1307]: time="2025-05-15T00:59:07.590775894Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:07.592706 env[1307]: time="2025-05-15T00:59:07.592690056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 15 00:59:07.594211 env[1307]: time="2025-05-15T00:59:07.594193661Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:07.594617 env[1307]: time="2025-05-15T00:59:07.594596924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:b1960e792987d99ee8f3583d7354dcd25a683cf854e8f10322ca7eeb83128532\"" May 15 00:59:07.596155 env[1307]: time="2025-05-15T00:59:07.596138270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\"" May 15 00:59:07.597181 env[1307]: time="2025-05-15T00:59:07.597161831Z" level=info msg="CreateContainer within sandbox \"a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:59:07.704229 env[1307]: time="2025-05-15T00:59:07.704164793Z" level=info msg="StartContainer for \"cda329d8d1dde6fcd82b03ccf844c3001caeba3951b684395dac1ad8cd4ccffc\" returns successfully" May 15 00:59:07.713589 env[1307]: time="2025-05-15T00:59:07.713556800Z" level=info msg="CreateContainer within sandbox \"a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1fb8730e24e100f49cf78754a7356e18f3841625e65450d9485ac1bb264d27c5\"" May 15 00:59:07.714185 env[1307]: time="2025-05-15T00:59:07.714143293Z" level=info msg="StartContainer for \"1fb8730e24e100f49cf78754a7356e18f3841625e65450d9485ac1bb264d27c5\"" May 15 00:59:07.726436 sshd[5300]: pam_unix(sshd:session): session closed for user core May 15 00:59:07.736088 kernel: audit: type=1106 audit(1747270747.726:534): pid=5300 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.736191 kernel: audit: type=1104 audit(1747270747.726:535): pid=5300 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.726000 audit[5300]: USER_END pid=5300 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.726000 audit[5300]: CRED_DISP pid=5300 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:07.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.134:22-10.0.0.1:48006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:07.728823 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:48006.service: Deactivated successfully. May 15 00:59:07.729538 systemd[1]: session-22.scope: Deactivated successfully. May 15 00:59:07.730181 systemd-logind[1293]: Session 22 logged out. Waiting for processes to exit. May 15 00:59:07.730863 systemd-logind[1293]: Removed session 22. 
May 15 00:59:07.772419 env[1307]: time="2025-05-15T00:59:07.772292884Z" level=info msg="StartContainer for \"1fb8730e24e100f49cf78754a7356e18f3841625e65450d9485ac1bb264d27c5\" returns successfully" May 15 00:59:08.155308 systemd-networkd[1089]: calied4c1f0edea: Gained IPv6LL May 15 00:59:08.175671 kubelet[2211]: E0515 00:59:08.175632 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:08.196982 kubelet[2211]: I0515 00:59:08.196104 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-2cxpm" podStartSLOduration=74.196085189 podStartE2EDuration="1m14.196085189s" podCreationTimestamp="2025-05-15 00:57:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:59:08.186965211 +0000 UTC m=+90.309804292" watchObservedRunningTime="2025-05-15 00:59:08.196085189 +0000 UTC m=+90.318924250" May 15 00:59:08.198000 audit[5394]: NETFILTER_CFG table=filter:109 family=2 entries=10 op=nft_register_rule pid=5394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:08.198000 audit[5394]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe20d258f0 a2=0 a3=7ffe20d258dc items=0 ppid=2416 pid=5394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:08.198000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:08.205000 audit[5394]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=5394 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:08.205000 audit[5394]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=14196 a0=3 a1=7ffe20d258f0 a2=0 a3=7ffe20d258dc items=0 ppid=2416 pid=5394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:08.205000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:08.213642 kubelet[2211]: I0515 00:59:08.213576 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67fbb64cb9-vxtzh" podStartSLOduration=64.873790505 podStartE2EDuration="1m7.213556778s" podCreationTimestamp="2025-05-15 00:58:01 +0000 UTC" firstStartedPulling="2025-05-15 00:59:05.255849803 +0000 UTC m=+87.378688874" lastFinishedPulling="2025-05-15 00:59:07.595616066 +0000 UTC m=+89.718455147" observedRunningTime="2025-05-15 00:59:08.196674972 +0000 UTC m=+90.319514043" watchObservedRunningTime="2025-05-15 00:59:08.213556778 +0000 UTC m=+90.336395869" May 15 00:59:08.220000 audit[5396]: NETFILTER_CFG table=filter:111 family=2 entries=10 op=nft_register_rule pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:08.220000 audit[5396]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc1e330ef0 a2=0 a3=7ffc1e330edc items=0 ppid=2416 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:08.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:08.227000 audit[5396]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=5396 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:08.227000 audit[5396]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffc1e330ef0 a2=0 a3=7ffc1e330edc items=0 ppid=2416 pid=5396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:08.227000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:08.539152 systemd-networkd[1089]: calif3ccb0317de: Gained IPv6LL May 15 00:59:08.974086 env[1307]: time="2025-05-15T00:59:08.974004777Z" level=info msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\"" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.054 [INFO][5413] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.055 [INFO][5413] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" iface="eth0" netns="/var/run/netns/cni-7b89cbbc-2761-bbb4-33a6-20aab63b8c26" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.055 [INFO][5413] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" iface="eth0" netns="/var/run/netns/cni-7b89cbbc-2761-bbb4-33a6-20aab63b8c26" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.055 [INFO][5413] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" iface="eth0" netns="/var/run/netns/cni-7b89cbbc-2761-bbb4-33a6-20aab63b8c26" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.055 [INFO][5413] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.055 [INFO][5413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.075 [INFO][5421] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" HandleID="k8s-pod-network.7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.075 [INFO][5421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.075 [INFO][5421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.080 [WARNING][5421] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" HandleID="k8s-pod-network.7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.080 [INFO][5421] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" HandleID="k8s-pod-network.7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.081 [INFO][5421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:09.084166 env[1307]: 2025-05-15 00:59:09.082 [INFO][5413] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:09.084657 env[1307]: time="2025-05-15T00:59:09.084303373Z" level=info msg="TearDown network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" successfully" May 15 00:59:09.084657 env[1307]: time="2025-05-15T00:59:09.084336595Z" level=info msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" returns successfully" May 15 00:59:09.084928 env[1307]: time="2025-05-15T00:59:09.084897695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67fbb64cb9-tnhq7,Uid:88d3eb5f-c3af-435c-afdd-38692e59dcc7,Namespace:calico-apiserver,Attempt:1,}" May 15 00:59:09.086743 systemd[1]: run-netns-cni\x2d7b89cbbc\x2d2761\x2dbbb4\x2d33a6\x2d20aab63b8c26.mount: Deactivated successfully. 
May 15 00:59:09.184480 kubelet[2211]: E0515 00:59:09.184192 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:09.196923 systemd-networkd[1089]: cali1590e92a9e2: Link UP May 15 00:59:09.199766 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 00:59:09.199824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1590e92a9e2: link becomes ready May 15 00:59:09.200060 systemd-networkd[1089]: cali1590e92a9e2: Gained carrier May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.131 [INFO][5428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0 calico-apiserver-67fbb64cb9- calico-apiserver 88d3eb5f-c3af-435c-afdd-38692e59dcc7 1207 0 2025-05-15 00:58:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67fbb64cb9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67fbb64cb9-tnhq7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1590e92a9e2 [] []}} ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-tnhq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.131 [INFO][5428] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-tnhq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.159 [INFO][5444] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" HandleID="k8s-pod-network.d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.166 [INFO][5444] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" HandleID="k8s-pod-network.d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036b430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67fbb64cb9-tnhq7", "timestamp":"2025-05-15 00:59:09.159343246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.166 [INFO][5444] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.166 [INFO][5444] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.166 [INFO][5444] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.167 [INFO][5444] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" host="localhost" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.170 [INFO][5444] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.173 [INFO][5444] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.174 [INFO][5444] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.179 [INFO][5444] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.179 [INFO][5444] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" host="localhost" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.180 [INFO][5444] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.185 [INFO][5444] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" host="localhost" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.192 [INFO][5444] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" host="localhost" May 15 
00:59:09.209794 env[1307]: 2025-05-15 00:59:09.192 [INFO][5444] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" host="localhost" May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.192 [INFO][5444] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:09.209794 env[1307]: 2025-05-15 00:59:09.192 [INFO][5444] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" HandleID="k8s-pod-network.d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.210371 env[1307]: 2025-05-15 00:59:09.194 [INFO][5428] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-tnhq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0", GenerateName:"calico-apiserver-67fbb64cb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"88d3eb5f-c3af-435c-afdd-38692e59dcc7", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fbb64cb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67fbb64cb9-tnhq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1590e92a9e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:09.210371 env[1307]: 2025-05-15 00:59:09.194 [INFO][5428] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-tnhq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.210371 env[1307]: 2025-05-15 00:59:09.194 [INFO][5428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1590e92a9e2 ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-tnhq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.210371 env[1307]: 2025-05-15 00:59:09.199 [INFO][5428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-tnhq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.210371 env[1307]: 2025-05-15 00:59:09.200 [INFO][5428] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-tnhq7" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0", GenerateName:"calico-apiserver-67fbb64cb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"88d3eb5f-c3af-435c-afdd-38692e59dcc7", ResourceVersion:"1207", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fbb64cb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d", Pod:"calico-apiserver-67fbb64cb9-tnhq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1590e92a9e2", MAC:"c6:7c:a5:86:7d:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:09.210371 env[1307]: 2025-05-15 00:59:09.207 [INFO][5428] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d" Namespace="calico-apiserver" Pod="calico-apiserver-67fbb64cb9-tnhq7" WorkloadEndpoint="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:09.220000 audit[5466]: 
NETFILTER_CFG table=filter:113 family=2 entries=52 op=nft_register_chain pid=5466 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 15 00:59:09.220000 audit[5466]: SYSCALL arch=c000003e syscall=46 success=yes exit=26728 a0=3 a1=7ffdca1c2b90 a2=0 a3=7ffdca1c2b7c items=0 ppid=4647 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:09.220000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 15 00:59:09.226511 env[1307]: time="2025-05-15T00:59:09.226411252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 00:59:09.226511 env[1307]: time="2025-05-15T00:59:09.226477367Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 00:59:09.226511 env[1307]: time="2025-05-15T00:59:09.226497715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 00:59:09.226721 env[1307]: time="2025-05-15T00:59:09.226677431Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d pid=5474 runtime=io.containerd.runc.v2 May 15 00:59:09.239000 audit[5493]: NETFILTER_CFG table=filter:114 family=2 entries=9 op=nft_register_rule pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:09.239000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe53037c90 a2=0 a3=7ffe53037c7c items=0 ppid=2416 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:09.239000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:09.250000 audit[5493]: NETFILTER_CFG table=nat:115 family=2 entries=63 op=nft_register_chain pid=5493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:09.250000 audit[5493]: SYSCALL arch=c000003e syscall=46 success=yes exit=23436 a0=3 a1=7ffe53037c90 a2=0 a3=7ffe53037c7c items=0 ppid=2416 pid=5493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:09.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:09.254202 systemd-resolved[1223]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 00:59:09.281896 env[1307]: time="2025-05-15T00:59:09.281852620Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67fbb64cb9-tnhq7,Uid:88d3eb5f-c3af-435c-afdd-38692e59dcc7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d\"" May 15 00:59:09.284393 env[1307]: time="2025-05-15T00:59:09.284361227Z" level=info msg="CreateContainer within sandbox \"d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 15 00:59:09.295820 env[1307]: time="2025-05-15T00:59:09.295788032Z" level=info msg="CreateContainer within sandbox \"d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c35e2cda6aa8cc84e748bc6f9c620afd5df08f5be7af169b25c7ad30bf575af6\"" May 15 00:59:09.297547 env[1307]: time="2025-05-15T00:59:09.297517750Z" level=info msg="StartContainer for \"c35e2cda6aa8cc84e748bc6f9c620afd5df08f5be7af169b25c7ad30bf575af6\"" May 15 00:59:09.308115 systemd-networkd[1089]: cali83ab2f3d88c: Gained IPv6LL May 15 00:59:09.352983 env[1307]: time="2025-05-15T00:59:09.352067398Z" level=info msg="StartContainer for \"c35e2cda6aa8cc84e748bc6f9c620afd5df08f5be7af169b25c7ad30bf575af6\" returns successfully" May 15 00:59:09.996947 env[1307]: time="2025-05-15T00:59:09.996896221Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:09.999363 env[1307]: time="2025-05-15T00:59:09.999322193Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:10.001123 env[1307]: time="2025-05-15T00:59:10.001085565Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:10.002583 env[1307]: time="2025-05-15T00:59:10.002538370Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:feaab0197035d474845e0f8137a99a78cab274f0a3cac4d5485cf9b1bdf9ffa9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:10.003027 env[1307]: time="2025-05-15T00:59:10.003002630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.3\" returns image reference \"sha256:4e982138231b3653a012db4f21ed5e7be69afd5f553dba38cf7e88f0ed740b94\"" May 15 00:59:10.004100 env[1307]: time="2025-05-15T00:59:10.004067437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 15 00:59:10.009604 env[1307]: time="2025-05-15T00:59:10.009556803Z" level=info msg="CreateContainer within sandbox \"75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 15 00:59:10.025075 env[1307]: time="2025-05-15T00:59:10.025024190Z" level=info msg="CreateContainer within sandbox \"75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f362435b8fa442d50ced1976aee2ed94f92ef55165295adb98405014048eb59c\"" May 15 00:59:10.025612 env[1307]: time="2025-05-15T00:59:10.025563712Z" level=info msg="StartContainer for \"f362435b8fa442d50ced1976aee2ed94f92ef55165295adb98405014048eb59c\"" May 15 00:59:10.082193 env[1307]: time="2025-05-15T00:59:10.082149452Z" level=info msg="StartContainer for \"f362435b8fa442d50ced1976aee2ed94f92ef55165295adb98405014048eb59c\" returns successfully" May 15 00:59:10.189412 kubelet[2211]: E0515 00:59:10.189368 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:10.206910 kubelet[2211]: I0515 00:59:10.206839 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67fbb64cb9-tnhq7" podStartSLOduration=69.206816712 podStartE2EDuration="1m9.206816712s" podCreationTimestamp="2025-05-15 00:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 00:59:10.1976499 +0000 UTC m=+92.320488971" watchObservedRunningTime="2025-05-15 00:59:10.206816712 +0000 UTC m=+92.329655773" May 15 00:59:10.208000 audit[5604]: NETFILTER_CFG table=filter:116 family=2 entries=8 op=nft_register_rule pid=5604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:10.208000 audit[5604]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff4c1dbec0 a2=0 a3=7fff4c1dbeac items=0 ppid=2416 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:10.208000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:10.212000 audit[5604]: NETFILTER_CFG table=nat:117 family=2 entries=30 op=nft_register_rule pid=5604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:10.212000 audit[5604]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff4c1dbec0 a2=0 a3=7fff4c1dbeac items=0 ppid=2416 pid=5604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:10.212000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:10.243326 kubelet[2211]: I0515 00:59:10.243271 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c78b9db48-dl2b8" podStartSLOduration=65.462959025 podStartE2EDuration="1m9.243253034s" podCreationTimestamp="2025-05-15 00:58:01 +0000 UTC" firstStartedPulling="2025-05-15 00:59:06.223514584 +0000 UTC m=+88.346353655" lastFinishedPulling="2025-05-15 00:59:10.003808573 +0000 UTC m=+92.126647664" observedRunningTime="2025-05-15 00:59:10.207357356 +0000 UTC m=+92.330196427" watchObservedRunningTime="2025-05-15 00:59:10.243253034 +0000 UTC m=+92.366092106" May 15 00:59:10.267168 systemd-networkd[1089]: cali1590e92a9e2: Gained IPv6LL May 15 00:59:11.223000 audit[5618]: NETFILTER_CFG table=filter:118 family=2 entries=8 op=nft_register_rule pid=5618 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:11.223000 audit[5618]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcda688550 a2=0 a3=7ffcda68853c items=0 ppid=2416 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:11.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:11.228000 audit[5618]: NETFILTER_CFG table=nat:119 family=2 entries=34 op=nft_register_chain pid=5618 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:11.228000 audit[5618]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffcda688550 a2=0 a3=7ffcda68853c items=0 ppid=2416 pid=5618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:11.228000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:11.287344 env[1307]: time="2025-05-15T00:59:11.287282096Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:11.289559 env[1307]: time="2025-05-15T00:59:11.289511934Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:11.291318 env[1307]: time="2025-05-15T00:59:11.291296345Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:11.292915 env[1307]: time="2025-05-15T00:59:11.292874840Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:11.293292 env[1307]: time="2025-05-15T00:59:11.293259212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" May 15 00:59:11.295075 env[1307]: time="2025-05-15T00:59:11.295046579Z" level=info msg="CreateContainer within sandbox \"d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 15 00:59:11.308974 env[1307]: time="2025-05-15T00:59:11.308931479Z" level=info msg="CreateContainer within sandbox \"d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4c2e5c469a0144a00c95e7365137d107ccbc4b38f916fefa36ad05ecc475371a\"" May 15 00:59:11.309358 env[1307]: time="2025-05-15T00:59:11.309327263Z" level=info msg="StartContainer for \"4c2e5c469a0144a00c95e7365137d107ccbc4b38f916fefa36ad05ecc475371a\"" May 15 00:59:11.397304 env[1307]: time="2025-05-15T00:59:11.397259567Z" level=info msg="StartContainer for \"4c2e5c469a0144a00c95e7365137d107ccbc4b38f916fefa36ad05ecc475371a\" returns successfully" May 15 00:59:11.399478 env[1307]: time="2025-05-15T00:59:11.399448509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 15 00:59:12.729791 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:48020.service. May 15 00:59:12.734980 kernel: kauditd_printk_skb: 34 callbacks suppressed May 15 00:59:12.735094 kernel: audit: type=1130 audit(1747270752.728:548): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.134:22-10.0.0.1:48020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:12.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.134:22-10.0.0.1:48020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:12.765000 audit[5653]: USER_ACCT pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.767002 sshd[5653]: Accepted publickey for core from 10.0.0.1 port 48020 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:12.769434 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:12.768000 audit[5653]: CRED_ACQ pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.772933 systemd-logind[1293]: New session 23 of user core. May 15 00:59:12.773884 systemd[1]: Started session-23.scope. May 15 00:59:12.775043 kernel: audit: type=1101 audit(1747270752.765:549): pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.775105 kernel: audit: type=1103 audit(1747270752.768:550): pid=5653 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.777744 kernel: audit: type=1006 audit(1747270752.768:551): pid=5653 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 May 15 00:59:12.768000 audit[5653]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbda9bf80 a2=3 a3=0 items=0 ppid=1 pid=5653 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:12.781799 kernel: audit: type=1300 audit(1747270752.768:551): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbda9bf80 a2=3 a3=0 items=0 ppid=1 pid=5653 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:12.781862 kernel: audit: type=1327 audit(1747270752.768:551): proctitle=737368643A20636F7265205B707269765D May 15 00:59:12.768000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:12.783101 kernel: audit: type=1105 audit(1747270752.777:552): pid=5653 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.777000 audit[5653]: USER_START pid=5653 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.787364 kernel: audit: type=1103 audit(1747270752.779:553): pid=5656 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.779000 audit[5656]: CRED_ACQ pid=5656 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.924071 sshd[5653]: pam_unix(sshd:session): session closed for user core May 15 00:59:12.924000 
audit[5653]: USER_END pid=5653 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.927030 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:48020.service: Deactivated successfully. May 15 00:59:12.928140 systemd[1]: session-23.scope: Deactivated successfully. May 15 00:59:12.928793 systemd-logind[1293]: Session 23 logged out. Waiting for processes to exit. May 15 00:59:12.932981 kernel: audit: type=1106 audit(1747270752.924:554): pid=5653 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.933048 kernel: audit: type=1104 audit(1747270752.924:555): pid=5653 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.924000 audit[5653]: CRED_DISP pid=5653 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:12.929662 systemd-logind[1293]: Removed session 23. May 15 00:59:12.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.134:22-10.0.0.1:48020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:12.974128 kubelet[2211]: E0515 00:59:12.974076 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:13.185152 env[1307]: time="2025-05-15T00:59:13.185101813Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:13.187052 env[1307]: time="2025-05-15T00:59:13.187023172Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:13.188657 env[1307]: time="2025-05-15T00:59:13.188619410Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:13.190132 env[1307]: time="2025-05-15T00:59:13.190099969Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 00:59:13.190558 env[1307]: time="2025-05-15T00:59:13.190536612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" May 15 00:59:13.192529 env[1307]: time="2025-05-15T00:59:13.192477989Z" level=info msg="CreateContainer within sandbox \"d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 15 00:59:13.204325 env[1307]: time="2025-05-15T00:59:13.204220501Z" level=info msg="CreateContainer 
within sandbox \"d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e8645315f8a2ea1092b55394861d676b88e5951ac72855935f4819ba2f93b9b0\"" May 15 00:59:13.204728 env[1307]: time="2025-05-15T00:59:13.204700004Z" level=info msg="StartContainer for \"e8645315f8a2ea1092b55394861d676b88e5951ac72855935f4819ba2f93b9b0\"" May 15 00:59:13.247265 env[1307]: time="2025-05-15T00:59:13.247216600Z" level=info msg="StartContainer for \"e8645315f8a2ea1092b55394861d676b88e5951ac72855935f4819ba2f93b9b0\" returns successfully" May 15 00:59:14.072419 kubelet[2211]: I0515 00:59:14.072374 2211 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 15 00:59:14.072419 kubelet[2211]: I0515 00:59:14.072412 2211 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 15 00:59:14.208581 kubelet[2211]: I0515 00:59:14.207382 2211 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pk5fw" podStartSLOduration=67.295294853 podStartE2EDuration="1m13.207366762s" podCreationTimestamp="2025-05-15 00:58:01 +0000 UTC" firstStartedPulling="2025-05-15 00:59:07.279199225 +0000 UTC m=+89.402038287" lastFinishedPulling="2025-05-15 00:59:13.191271125 +0000 UTC m=+95.314110196" observedRunningTime="2025-05-15 00:59:14.207181763 +0000 UTC m=+96.330020834" watchObservedRunningTime="2025-05-15 00:59:14.207366762 +0000 UTC m=+96.330205833" May 15 00:59:17.927142 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:59586.service. May 15 00:59:17.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.134:22-10.0.0.1:59586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 15 00:59:17.928291 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:59:17.928391 kernel: audit: type=1130 audit(1747270757.926:557): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.134:22-10.0.0.1:59586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:17.962000 audit[5732]: USER_ACCT pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:17.964081 sshd[5732]: Accepted publickey for core from 10.0.0.1 port 59586 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:17.966692 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:17.965000 audit[5732]: CRED_ACQ pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:17.970300 systemd-logind[1293]: New session 24 of user core. May 15 00:59:17.971024 systemd[1]: Started session-24.scope. 
May 15 00:59:17.971432 kernel: audit: type=1101 audit(1747270757.962:558): pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:17.971554 kernel: audit: type=1103 audit(1747270757.965:559): pid=5732 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:17.973917 kernel: audit: type=1006 audit(1747270757.965:560): pid=5732 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 May 15 00:59:17.975017 kubelet[2211]: E0515 00:59:17.974988 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:17.965000 audit[5732]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeee1e6630 a2=3 a3=0 items=0 ppid=1 pid=5732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:17.965000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:17.981032 kernel: audit: type=1300 audit(1747270757.965:560): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeee1e6630 a2=3 a3=0 items=0 ppid=1 pid=5732 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:17.981081 kernel: audit: type=1327 audit(1747270757.965:560): proctitle=737368643A20636F7265205B707269765D May 15 00:59:17.981100 kernel: audit: type=1105 audit(1747270757.975:561): pid=5732 uid=0 
auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:17.975000 audit[5732]: USER_START pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:17.976000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:17.988582 kernel: audit: type=1103 audit(1747270757.976:562): pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:18.090237 sshd[5732]: pam_unix(sshd:session): session closed for user core May 15 00:59:18.090000 audit[5732]: USER_END pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:18.092624 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:59586.service: Deactivated successfully. May 15 00:59:18.093692 systemd[1]: session-24.scope: Deactivated successfully. May 15 00:59:18.094029 systemd-logind[1293]: Session 24 logged out. Waiting for processes to exit. May 15 00:59:18.094776 systemd-logind[1293]: Removed session 24. 
May 15 00:59:18.090000 audit[5732]: CRED_DISP pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:18.098967 kernel: audit: type=1106 audit(1747270758.090:563): pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:18.099023 kernel: audit: type=1104 audit(1747270758.090:564): pid=5732 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:18.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.134:22-10.0.0.1:59586 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:23.093080 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:59590.service. May 15 00:59:23.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.134:22-10.0.0.1:59590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:23.094235 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:59:23.094298 kernel: audit: type=1130 audit(1747270763.092:566): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.134:22-10.0.0.1:59590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:23.125000 audit[5748]: USER_ACCT pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.126239 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 59590 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:23.128478 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:23.127000 audit[5748]: CRED_ACQ pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.131827 systemd-logind[1293]: New session 25 of user core. May 15 00:59:23.132620 systemd[1]: Started session-25.scope. May 15 00:59:23.133425 kernel: audit: type=1101 audit(1747270763.125:567): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.133483 kernel: audit: type=1103 audit(1747270763.127:568): pid=5748 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.133516 kernel: audit: type=1006 audit(1747270763.127:569): pid=5748 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 May 15 00:59:23.127000 audit[5748]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe88f71000 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:23.139768 kernel: audit: type=1300 audit(1747270763.127:569): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe88f71000 a2=3 a3=0 items=0 ppid=1 pid=5748 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:23.139828 kernel: audit: type=1327 audit(1747270763.127:569): proctitle=737368643A20636F7265205B707269765D May 15 00:59:23.127000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:23.136000 audit[5748]: USER_START pid=5748 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.145235 kernel: audit: type=1105 audit(1747270763.136:570): pid=5748 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.145273 kernel: audit: type=1103 audit(1747270763.137:571): pid=5751 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.137000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.236167 sshd[5748]: pam_unix(sshd:session): session closed for user core May 15 00:59:23.236000 
audit[5748]: USER_END pid=5748 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.238447 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:59596.service. May 15 00:59:23.238839 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:59590.service: Deactivated successfully. May 15 00:59:23.239749 systemd[1]: session-25.scope: Deactivated successfully. May 15 00:59:23.236000 audit[5748]: CRED_DISP pid=5748 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.245474 kernel: audit: type=1106 audit(1747270763.236:572): pid=5748 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.245548 kernel: audit: type=1104 audit(1747270763.236:573): pid=5748 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.134:22-10.0.0.1:59596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:23.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.134:22-10.0.0.1:59590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 15 00:59:23.246308 systemd-logind[1293]: Session 25 logged out. Waiting for processes to exit. May 15 00:59:23.247208 systemd-logind[1293]: Removed session 25. May 15 00:59:23.271000 audit[5761]: USER_ACCT pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.272490 sshd[5761]: Accepted publickey for core from 10.0.0.1 port 59596 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:23.272000 audit[5761]: CRED_ACQ pid=5761 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.272000 audit[5761]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0502d270 a2=3 a3=0 items=0 ppid=1 pid=5761 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:23.272000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:23.273314 sshd[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:23.276220 systemd-logind[1293]: New session 26 of user core. May 15 00:59:23.276875 systemd[1]: Started session-26.scope. 
May 15 00:59:23.279000 audit[5761]: USER_START pid=5761 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.280000 audit[5766]: CRED_ACQ pid=5766 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.451727 sshd[5761]: pam_unix(sshd:session): session closed for user core May 15 00:59:23.451000 audit[5761]: USER_END pid=5761 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.451000 audit[5761]: CRED_DISP pid=5761 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.134:22-10.0.0.1:59598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:23.453893 systemd[1]: Started sshd@26-10.0.0.134:22-10.0.0.1:59598.service. May 15 00:59:23.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.134:22-10.0.0.1:59596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:23.454320 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:59596.service: Deactivated successfully. 
May 15 00:59:23.455149 systemd-logind[1293]: Session 26 logged out. Waiting for processes to exit. May 15 00:59:23.455176 systemd[1]: session-26.scope: Deactivated successfully. May 15 00:59:23.455911 systemd-logind[1293]: Removed session 26. May 15 00:59:23.488000 audit[5773]: USER_ACCT pid=5773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.489407 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 59598 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:23.489000 audit[5773]: CRED_ACQ pid=5773 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.489000 audit[5773]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc59df2370 a2=3 a3=0 items=0 ppid=1 pid=5773 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:23.489000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:23.490266 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:23.493202 systemd-logind[1293]: New session 27 of user core. May 15 00:59:23.493913 systemd[1]: Started session-27.scope. 
May 15 00:59:23.496000 audit[5773]: USER_START pid=5773 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:23.497000 audit[5778]: CRED_ACQ pid=5778 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:24.973000 audit[5790]: NETFILTER_CFG table=filter:120 family=2 entries=20 op=nft_register_rule pid=5790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:24.973000 audit[5790]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffca91603c0 a2=0 a3=7ffca91603ac items=0 ppid=2416 pid=5790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:24.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:24.978000 audit[5790]: NETFILTER_CFG table=nat:121 family=2 entries=22 op=nft_register_rule pid=5790 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:24.978000 audit[5790]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffca91603c0 a2=0 a3=0 items=0 ppid=2416 pid=5790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:24.978000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:24.981997 sshd[5773]: 
pam_unix(sshd:session): session closed for user core May 15 00:59:24.982000 audit[5773]: USER_END pid=5773 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:24.982000 audit[5773]: CRED_DISP pid=5773 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:24.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.134:22-10.0.0.1:59608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:24.984414 systemd[1]: Started sshd@27-10.0.0.134:22-10.0.0.1:59608.service. May 15 00:59:24.985765 systemd[1]: sshd@26-10.0.0.134:22-10.0.0.1:59598.service: Deactivated successfully. May 15 00:59:24.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.134:22-10.0.0.1:59598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:24.986441 systemd[1]: session-27.scope: Deactivated successfully. May 15 00:59:24.987160 systemd-logind[1293]: Session 27 logged out. Waiting for processes to exit. May 15 00:59:24.988160 systemd-logind[1293]: Removed session 27. 
May 15 00:59:24.993000 audit[5795]: NETFILTER_CFG table=filter:122 family=2 entries=32 op=nft_register_rule pid=5795 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:24.993000 audit[5795]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffae1c94f0 a2=0 a3=7fffae1c94dc items=0 ppid=2416 pid=5795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:24.993000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:24.999000 audit[5795]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5795 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:24.999000 audit[5795]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffae1c94f0 a2=0 a3=0 items=0 ppid=2416 pid=5795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:24.999000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:25.020000 audit[5791]: USER_ACCT pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.021512 sshd[5791]: Accepted publickey for core from 10.0.0.1 port 59608 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:25.021000 audit[5791]: CRED_ACQ pid=5791 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.021000 audit[5791]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5d0f0740 a2=3 a3=0 items=0 ppid=1 pid=5791 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:25.021000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:25.022606 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:25.027640 systemd[1]: Started session-28.scope. May 15 00:59:25.028566 systemd-logind[1293]: New session 28 of user core. May 15 00:59:25.032000 audit[5791]: USER_START pid=5791 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.033000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.325000 audit[5791]: USER_END pid=5791 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.325000 audit[5791]: CRED_DISP pid=5791 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.327949 systemd[1]: Started sshd@28-10.0.0.134:22-10.0.0.1:59624.service. 
May 15 00:59:25.326166 sshd[5791]: pam_unix(sshd:session): session closed for user core May 15 00:59:25.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.134:22-10.0.0.1:59624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:25.329348 systemd[1]: sshd@27-10.0.0.134:22-10.0.0.1:59608.service: Deactivated successfully. May 15 00:59:25.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-10.0.0.134:22-10.0.0.1:59608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:25.330544 systemd[1]: session-28.scope: Deactivated successfully. May 15 00:59:25.330629 systemd-logind[1293]: Session 28 logged out. Waiting for processes to exit. May 15 00:59:25.331526 systemd-logind[1293]: Removed session 28. May 15 00:59:25.360000 audit[5812]: USER_ACCT pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.363225 sshd[5812]: Accepted publickey for core from 10.0.0.1 port 59624 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:25.361000 audit[5812]: CRED_ACQ pid=5812 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.362000 audit[5812]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcbf47cac0 a2=3 a3=0 items=0 ppid=1 pid=5812 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 
00:59:25.362000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:25.364246 sshd[5812]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:25.368978 systemd[1]: Started session-29.scope. May 15 00:59:25.369379 systemd-logind[1293]: New session 29 of user core. May 15 00:59:25.372000 audit[5812]: USER_START pid=5812 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.373000 audit[5818]: CRED_ACQ pid=5818 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.489679 sshd[5812]: pam_unix(sshd:session): session closed for user core May 15 00:59:25.488000 audit[5812]: USER_END pid=5812 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.488000 audit[5812]: CRED_DISP pid=5812 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:25.492113 systemd[1]: sshd@28-10.0.0.134:22-10.0.0.1:59624.service: Deactivated successfully. May 15 00:59:25.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-10.0.0.134:22-10.0.0.1:59624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:25.493081 systemd[1]: session-29.scope: Deactivated successfully. May 15 00:59:25.493105 systemd-logind[1293]: Session 29 logged out. Waiting for processes to exit. May 15 00:59:25.494050 systemd-logind[1293]: Removed session 29. May 15 00:59:30.492473 systemd[1]: Started sshd@29-10.0.0.134:22-10.0.0.1:46472.service. May 15 00:59:30.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.134:22-10.0.0.1:46472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:30.493619 kernel: kauditd_printk_skb: 57 callbacks suppressed May 15 00:59:30.493760 kernel: audit: type=1130 audit(1747270770.491:615): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.134:22-10.0.0.1:46472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:30.526000 audit[5834]: USER_ACCT pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.528105 sshd[5834]: Accepted publickey for core from 10.0.0.1 port 46472 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:30.530815 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:30.529000 audit[5834]: CRED_ACQ pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.534562 systemd-logind[1293]: New session 30 of user core. May 15 00:59:30.535282 systemd[1]: Started session-30.scope. 
May 15 00:59:30.535389 kernel: audit: type=1101 audit(1747270770.526:616): pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.535421 kernel: audit: type=1103 audit(1747270770.529:617): pid=5834 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.535436 kernel: audit: type=1006 audit(1747270770.529:618): pid=5834 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 May 15 00:59:30.529000 audit[5834]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0a992bd0 a2=3 a3=0 items=0 ppid=1 pid=5834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:30.541494 kernel: audit: type=1300 audit(1747270770.529:618): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe0a992bd0 a2=3 a3=0 items=0 ppid=1 pid=5834 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:30.529000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:30.542976 kernel: audit: type=1327 audit(1747270770.529:618): proctitle=737368643A20636F7265205B707269765D May 15 00:59:30.543019 kernel: audit: type=1105 audit(1747270770.539:619): pid=5834 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.539000 audit[5834]: USER_START pid=5834 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.547135 kernel: audit: type=1103 audit(1747270770.540:620): pid=5837 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.540000 audit[5837]: CRED_ACQ pid=5837 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.641189 sshd[5834]: pam_unix(sshd:session): session closed for user core May 15 00:59:30.641000 audit[5834]: USER_END pid=5834 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.643586 systemd[1]: sshd@29-10.0.0.134:22-10.0.0.1:46472.service: Deactivated successfully. May 15 00:59:30.644327 systemd[1]: session-30.scope: Deactivated successfully. May 15 00:59:30.645095 systemd-logind[1293]: Session 30 logged out. Waiting for processes to exit. May 15 00:59:30.645861 systemd-logind[1293]: Removed session 30. 
May 15 00:59:30.641000 audit[5834]: CRED_DISP pid=5834 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.650687 kernel: audit: type=1106 audit(1747270770.641:621): pid=5834 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.650747 kernel: audit: type=1104 audit(1747270770.641:622): pid=5834 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:30.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-10.0.0.134:22-10.0.0.1:46472 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:31.002000 audit[5850]: NETFILTER_CFG table=filter:124 family=2 entries=20 op=nft_register_rule pid=5850 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:31.002000 audit[5850]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fffb9500130 a2=0 a3=7fffb950011c items=0 ppid=2416 pid=5850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:31.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:31.011000 audit[5850]: NETFILTER_CFG table=nat:125 family=2 entries=106 op=nft_register_chain pid=5850 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 15 00:59:31.011000 audit[5850]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fffb9500130 a2=0 a3=7fffb950011c items=0 ppid=2416 pid=5850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:31.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 15 00:59:31.876050 kubelet[2211]: E0515 00:59:31.876007 2211 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 00:59:35.644015 systemd[1]: Started sshd@30-10.0.0.134:22-10.0.0.1:46480.service. May 15 00:59:35.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.134:22-10.0.0.1:46480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:35.645154 kernel: kauditd_printk_skb: 7 callbacks suppressed May 15 00:59:35.645198 kernel: audit: type=1130 audit(1747270775.643:626): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.134:22-10.0.0.1:46480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:35.688000 audit[5874]: USER_ACCT pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.689455 sshd[5874]: Accepted publickey for core from 10.0.0.1 port 46480 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:35.692000 audit[5874]: CRED_ACQ pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.693652 sshd[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:35.694047 kernel: audit: type=1101 audit(1747270775.688:627): pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.694102 kernel: audit: type=1103 audit(1747270775.692:628): pid=5874 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.697441 systemd-logind[1293]: New session 31 of user core. May 15 00:59:35.698176 systemd[1]: Started session-31.scope. 
May 15 00:59:35.699309 kernel: audit: type=1006 audit(1747270775.692:629): pid=5874 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 May 15 00:59:35.699362 kernel: audit: type=1300 audit(1747270775.692:629): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3859b260 a2=3 a3=0 items=0 ppid=1 pid=5874 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:35.692000 audit[5874]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3859b260 a2=3 a3=0 items=0 ppid=1 pid=5874 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:35.692000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:35.704621 kernel: audit: type=1327 audit(1747270775.692:629): proctitle=737368643A20636F7265205B707269765D May 15 00:59:35.704663 kernel: audit: type=1105 audit(1747270775.701:630): pid=5874 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.701000 audit[5874]: USER_START pid=5874 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.702000 audit[5877]: CRED_ACQ pid=5877 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 
00:59:35.712127 kernel: audit: type=1103 audit(1747270775.702:631): pid=5877 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.810921 sshd[5874]: pam_unix(sshd:session): session closed for user core May 15 00:59:35.811000 audit[5874]: USER_END pid=5874 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.813574 systemd[1]: sshd@30-10.0.0.134:22-10.0.0.1:46480.service: Deactivated successfully. May 15 00:59:35.814656 systemd[1]: session-31.scope: Deactivated successfully. May 15 00:59:35.815141 systemd-logind[1293]: Session 31 logged out. Waiting for processes to exit. May 15 00:59:35.815975 systemd-logind[1293]: Removed session 31. 
May 15 00:59:35.811000 audit[5874]: CRED_DISP pid=5874 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.819894 kernel: audit: type=1106 audit(1747270775.811:632): pid=5874 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.819974 kernel: audit: type=1104 audit(1747270775.811:633): pid=5874 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:35.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-10.0.0.134:22-10.0.0.1:46480 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:37.957125 env[1307]: time="2025-05-15T00:59:37.957070504Z" level=info msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\"" May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:37.995 [WARNING][5905] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0", GenerateName:"calico-kube-controllers-c78b9db48-", Namespace:"calico-system", SelfLink:"", UID:"5098666b-a231-44ec-9bf5-415e006ee772", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c78b9db48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d", Pod:"calico-kube-controllers-c78b9db48-dl2b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied4c1f0edea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:37.995 [INFO][5905] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:37.995 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" iface="eth0" netns="" May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:37.995 [INFO][5905] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:37.995 [INFO][5905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:38.015 [INFO][5916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" HandleID="k8s-pod-network.dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:38.016 [INFO][5916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:38.016 [INFO][5916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:38.022 [WARNING][5916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" HandleID="k8s-pod-network.dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:38.022 [INFO][5916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" HandleID="k8s-pod-network.dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:38.023 [INFO][5916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:38.027157 env[1307]: 2025-05-15 00:59:38.025 [INFO][5905] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:38.027619 env[1307]: time="2025-05-15T00:59:38.027173931Z" level=info msg="TearDown network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" successfully" May 15 00:59:38.027619 env[1307]: time="2025-05-15T00:59:38.027195853Z" level=info msg="StopPodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" returns successfully" May 15 00:59:38.028131 env[1307]: time="2025-05-15T00:59:38.028105959Z" level=info msg="RemovePodSandbox for \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\"" May 15 00:59:38.028240 env[1307]: time="2025-05-15T00:59:38.028204289Z" level=info msg="Forcibly stopping sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\"" May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.059 [WARNING][5940] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0", GenerateName:"calico-kube-controllers-c78b9db48-", Namespace:"calico-system", SelfLink:"", UID:"5098666b-a231-44ec-9bf5-415e006ee772", ResourceVersion:"1231", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c78b9db48", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75d0ab4141fde08068ff5e1be57acf162d57b3cbe293c11c3b7ce54943cbdc5d", Pod:"calico-kube-controllers-c78b9db48-dl2b8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied4c1f0edea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.059 [INFO][5940] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.059 [INFO][5940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" iface="eth0" netns="" May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.059 [INFO][5940] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.059 [INFO][5940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.077 [INFO][5949] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" HandleID="k8s-pod-network.dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.077 [INFO][5949] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.077 [INFO][5949] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.082 [WARNING][5949] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" HandleID="k8s-pod-network.dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.082 [INFO][5949] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" HandleID="k8s-pod-network.dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" Workload="localhost-k8s-calico--kube--controllers--c78b9db48--dl2b8-eth0" May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.083 [INFO][5949] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:38.086435 env[1307]: 2025-05-15 00:59:38.084 [INFO][5940] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc" May 15 00:59:38.087067 env[1307]: time="2025-05-15T00:59:38.086448417Z" level=info msg="TearDown network for sandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" successfully" May 15 00:59:38.095431 env[1307]: time="2025-05-15T00:59:38.095370490Z" level=info msg="RemovePodSandbox \"dab6b14241589c0c0ec8fcce008adf799dd0dda02a8d0ea72bd4560c4594e9dc\" returns successfully" May 15 00:59:38.096004 env[1307]: time="2025-05-15T00:59:38.095949278Z" level=info msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\"" May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.125 [WARNING][5971] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lg945-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"04816f63-0644-4a43-8b7e-41868b6f8780", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30", Pod:"coredns-7db6d8ff4d-lg945", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b04d6857ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.125 [INFO][5971] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.125 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" iface="eth0" netns="" May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.125 [INFO][5971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.125 [INFO][5971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.145 [INFO][5979] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" HandleID="k8s-pod-network.aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.145 [INFO][5979] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.145 [INFO][5979] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.150 [WARNING][5979] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" HandleID="k8s-pod-network.aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.150 [INFO][5979] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" HandleID="k8s-pod-network.aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.151 [INFO][5979] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:38.153914 env[1307]: 2025-05-15 00:59:38.152 [INFO][5971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:38.154741 env[1307]: time="2025-05-15T00:59:38.153931169Z" level=info msg="TearDown network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" successfully" May 15 00:59:38.154741 env[1307]: time="2025-05-15T00:59:38.153975213Z" level=info msg="StopPodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" returns successfully" May 15 00:59:38.154741 env[1307]: time="2025-05-15T00:59:38.154511739Z" level=info msg="RemovePodSandbox for \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\"" May 15 00:59:38.154741 env[1307]: time="2025-05-15T00:59:38.154546887Z" level=info msg="Forcibly stopping sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\"" May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.184 [WARNING][6002] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--lg945-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"04816f63-0644-4a43-8b7e-41868b6f8780", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5d76cb0cf90ff14dc0c2a9e5ac453aae207a6e37b657eecf853ee6852aa0e30", Pod:"coredns-7db6d8ff4d-lg945", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8b04d6857ed", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.184 [INFO][6002] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.184 [INFO][6002] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" iface="eth0" netns="" May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.184 [INFO][6002] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.184 [INFO][6002] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.202 [INFO][6011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" HandleID="k8s-pod-network.aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.202 [INFO][6011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.202 [INFO][6011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.207 [WARNING][6011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" HandleID="k8s-pod-network.aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.207 [INFO][6011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" HandleID="k8s-pod-network.aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" Workload="localhost-k8s-coredns--7db6d8ff4d--lg945-eth0" May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.208 [INFO][6011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:38.211576 env[1307]: 2025-05-15 00:59:38.210 [INFO][6002] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087" May 15 00:59:38.211576 env[1307]: time="2025-05-15T00:59:38.211532755Z" level=info msg="TearDown network for sandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" successfully" May 15 00:59:38.215022 env[1307]: time="2025-05-15T00:59:38.215001041Z" level=info msg="RemovePodSandbox \"aa790ae105eda9fc3af13999c33f3cc1daf778245c22d8fc2ea8b3038cf7c087\" returns successfully" May 15 00:59:38.215521 env[1307]: time="2025-05-15T00:59:38.215480066Z" level=info msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\"" May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.246 [WARNING][6033] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0", GenerateName:"calico-apiserver-67fbb64cb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c73ac129-52ad-46f3-b7aa-1b4346bf3d86", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fbb64cb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357", Pod:"calico-apiserver-67fbb64cb9-vxtzh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1260027e89b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.246 [INFO][6033] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.246 [INFO][6033] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" iface="eth0" netns="" May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.246 [INFO][6033] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.246 [INFO][6033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.265 [INFO][6043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" HandleID="k8s-pod-network.0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.266 [INFO][6043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.266 [INFO][6043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.272 [WARNING][6043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" HandleID="k8s-pod-network.0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.272 [INFO][6043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" HandleID="k8s-pod-network.0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.273 [INFO][6043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:38.276708 env[1307]: 2025-05-15 00:59:38.275 [INFO][6033] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:38.277234 env[1307]: time="2025-05-15T00:59:38.276721378Z" level=info msg="TearDown network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" successfully" May 15 00:59:38.277234 env[1307]: time="2025-05-15T00:59:38.276751726Z" level=info msg="StopPodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" returns successfully" May 15 00:59:38.277234 env[1307]: time="2025-05-15T00:59:38.277179262Z" level=info msg="RemovePodSandbox for \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\"" May 15 00:59:38.277234 env[1307]: time="2025-05-15T00:59:38.277201786Z" level=info msg="Forcibly stopping sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\"" May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.310 [WARNING][6067] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0", GenerateName:"calico-apiserver-67fbb64cb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"c73ac129-52ad-46f3-b7aa-1b4346bf3d86", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fbb64cb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a7e3c0f642100a933bfc5d1fe54b6bbbd365b448ab08523ed02d2f74582a2357", Pod:"calico-apiserver-67fbb64cb9-vxtzh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1260027e89b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.311 [INFO][6067] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.311 [INFO][6067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" iface="eth0" netns="" May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.311 [INFO][6067] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.311 [INFO][6067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.330 [INFO][6075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" HandleID="k8s-pod-network.0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.330 [INFO][6075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.330 [INFO][6075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.335 [WARNING][6075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" HandleID="k8s-pod-network.0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.335 [INFO][6075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" HandleID="k8s-pod-network.0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--vxtzh-eth0" May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.337 [INFO][6075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:38.342630 env[1307]: 2025-05-15 00:59:38.339 [INFO][6067] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f" May 15 00:59:38.343137 env[1307]: time="2025-05-15T00:59:38.342651192Z" level=info msg="TearDown network for sandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" successfully" May 15 00:59:38.346463 env[1307]: time="2025-05-15T00:59:38.346417763Z" level=info msg="RemovePodSandbox \"0cffcf3fce45c1681483e1b6d364da759e6f7d829e08523cbd7b7bf79c47ae7f\" returns successfully" May 15 00:59:38.347024 env[1307]: time="2025-05-15T00:59:38.346982172Z" level=info msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\"" May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.386 [WARNING][6097] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0", GenerateName:"calico-apiserver-67fbb64cb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"88d3eb5f-c3af-435c-afdd-38692e59dcc7", ResourceVersion:"1234", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fbb64cb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d", Pod:"calico-apiserver-67fbb64cb9-tnhq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1590e92a9e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.386 [INFO][6097] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.386 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" iface="eth0" netns="" May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.386 [INFO][6097] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.386 [INFO][6097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.406 [INFO][6105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" HandleID="k8s-pod-network.7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.406 [INFO][6105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.406 [INFO][6105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.410 [WARNING][6105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" HandleID="k8s-pod-network.7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.410 [INFO][6105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" HandleID="k8s-pod-network.7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.411 [INFO][6105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:38.414183 env[1307]: 2025-05-15 00:59:38.412 [INFO][6097] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:38.414758 env[1307]: time="2025-05-15T00:59:38.414727020Z" level=info msg="TearDown network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" successfully" May 15 00:59:38.414842 env[1307]: time="2025-05-15T00:59:38.414818316Z" level=info msg="StopPodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" returns successfully" May 15 00:59:38.415409 env[1307]: time="2025-05-15T00:59:38.415369120Z" level=info msg="RemovePodSandbox for \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\"" May 15 00:59:38.415489 env[1307]: time="2025-05-15T00:59:38.415444064Z" level=info msg="Forcibly stopping sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\"" May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.445 [WARNING][6127] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0", GenerateName:"calico-apiserver-67fbb64cb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"88d3eb5f-c3af-435c-afdd-38692e59dcc7", ResourceVersion:"1234", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67fbb64cb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d139148e532e2e4174cde5d3a558630caf176cd30ef23256811d4f00cd7f137d", Pod:"calico-apiserver-67fbb64cb9-tnhq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1590e92a9e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.445 [INFO][6127] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.445 [INFO][6127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" iface="eth0" netns="" May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.445 [INFO][6127] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.445 [INFO][6127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.462 [INFO][6136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" HandleID="k8s-pod-network.7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.462 [INFO][6136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.462 [INFO][6136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.467 [WARNING][6136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" HandleID="k8s-pod-network.7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.467 [INFO][6136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" HandleID="k8s-pod-network.7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" Workload="localhost-k8s-calico--apiserver--67fbb64cb9--tnhq7-eth0" May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.468 [INFO][6136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:38.472385 env[1307]: 2025-05-15 00:59:38.469 [INFO][6127] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68" May 15 00:59:38.472385 env[1307]: time="2025-05-15T00:59:38.471291355Z" level=info msg="TearDown network for sandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" successfully" May 15 00:59:38.474720 env[1307]: time="2025-05-15T00:59:38.474688472Z" level=info msg="RemovePodSandbox \"7340dd20e2750a8bdb510245c93763bf9ec75ebb0cb04c0718c9bf246e9aca68\" returns successfully" May 15 00:59:38.475204 env[1307]: time="2025-05-15T00:59:38.475157939Z" level=info msg="StopPodSandbox for \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\"" May 15 00:59:38.475293 env[1307]: time="2025-05-15T00:59:38.475243414Z" level=info msg="TearDown network for sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" successfully" May 15 00:59:38.475293 env[1307]: time="2025-05-15T00:59:38.475280296Z" level=info msg="StopPodSandbox for \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" returns successfully" May 15 00:59:38.475723 env[1307]: time="2025-05-15T00:59:38.475681149Z" level=info 
msg="RemovePodSandbox for \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\"" May 15 00:59:38.475877 env[1307]: time="2025-05-15T00:59:38.475721076Z" level=info msg="Forcibly stopping sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\"" May 15 00:59:38.475877 env[1307]: time="2025-05-15T00:59:38.475802043Z" level=info msg="TearDown network for sandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" successfully" May 15 00:59:38.479517 env[1307]: time="2025-05-15T00:59:38.479491685Z" level=info msg="RemovePodSandbox \"6b632c242248294de1ce82a10207f83054498dc4bcdb71437dfebf0d32ee7a66\" returns successfully" May 15 00:59:38.479853 env[1307]: time="2025-05-15T00:59:38.479829097Z" level=info msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\"" May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.510 [WARNING][6158] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pk5fw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"234fff70-d82a-4012-9e49-d23446deada6", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f", Pod:"csi-node-driver-pk5fw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3ccb0317de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.510 [INFO][6158] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.510 [INFO][6158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" iface="eth0" netns="" May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.510 [INFO][6158] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.510 [INFO][6158] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.528 [INFO][6166] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" HandleID="k8s-pod-network.232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.528 [INFO][6166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.528 [INFO][6166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.533 [WARNING][6166] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" HandleID="k8s-pod-network.232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.533 [INFO][6166] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" HandleID="k8s-pod-network.232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.534 [INFO][6166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:59:38.538123 env[1307]: 2025-05-15 00:59:38.536 [INFO][6158] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:38.538567 env[1307]: time="2025-05-15T00:59:38.538146425Z" level=info msg="TearDown network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" successfully" May 15 00:59:38.538567 env[1307]: time="2025-05-15T00:59:38.538176693Z" level=info msg="StopPodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" returns successfully" May 15 00:59:38.538844 env[1307]: time="2025-05-15T00:59:38.538797782Z" level=info msg="RemovePodSandbox for \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\"" May 15 00:59:38.538844 env[1307]: time="2025-05-15T00:59:38.538840945Z" level=info msg="Forcibly stopping sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\"" May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.569 [WARNING][6188] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pk5fw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"234fff70-d82a-4012-9e49-d23446deada6", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d59a1a45ede25a708902b761839533fc635151046bc7ab821b357561918b192f", Pod:"csi-node-driver-pk5fw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3ccb0317de", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.570 [INFO][6188] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.570 [INFO][6188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" iface="eth0" netns="" May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.570 [INFO][6188] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.570 [INFO][6188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.589 [INFO][6196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" HandleID="k8s-pod-network.232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.589 [INFO][6196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.589 [INFO][6196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.593 [WARNING][6196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" HandleID="k8s-pod-network.232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.593 [INFO][6196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" HandleID="k8s-pod-network.232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" Workload="localhost-k8s-csi--node--driver--pk5fw-eth0" May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.594 [INFO][6196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:59:38.597475 env[1307]: 2025-05-15 00:59:38.596 [INFO][6188] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b" May 15 00:59:38.597923 env[1307]: time="2025-05-15T00:59:38.597499302Z" level=info msg="TearDown network for sandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" successfully" May 15 00:59:38.601003 env[1307]: time="2025-05-15T00:59:38.600962197Z" level=info msg="RemovePodSandbox \"232fae7ee508151bcdc28aa19480a4ea76909ed2535b7fe0efe738bec2cdd99b\" returns successfully" May 15 00:59:38.601559 env[1307]: time="2025-05-15T00:59:38.601524323Z" level=info msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\"" May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.629 [WARNING][6220] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98", Pod:"coredns-7db6d8ff4d-2cxpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83ab2f3d88c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.630 [INFO][6220] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.630 [INFO][6220] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" iface="eth0" netns="" May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.630 [INFO][6220] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.630 [INFO][6220] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.649 [INFO][6228] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" HandleID="k8s-pod-network.ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.649 [INFO][6228] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.649 [INFO][6228] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.653 [WARNING][6228] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" HandleID="k8s-pod-network.ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.654 [INFO][6228] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" HandleID="k8s-pod-network.ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.656 [INFO][6228] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 15 00:59:38.659169 env[1307]: 2025-05-15 00:59:38.657 [INFO][6220] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:38.659639 env[1307]: time="2025-05-15T00:59:38.659191645Z" level=info msg="TearDown network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" successfully" May 15 00:59:38.659639 env[1307]: time="2025-05-15T00:59:38.659218468Z" level=info msg="StopPodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" returns successfully" May 15 00:59:38.659748 env[1307]: time="2025-05-15T00:59:38.659661202Z" level=info msg="RemovePodSandbox for \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\"" May 15 00:59:38.659748 env[1307]: time="2025-05-15T00:59:38.659698004Z" level=info msg="Forcibly stopping sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\"" May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.689 [WARNING][6251] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a8b9021c-44c0-4a1b-b21d-74304d9a9ec9", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2025, time.May, 15, 0, 57, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d49e666466e8186bb8e4b5902210b8ae5ddca5067e9d9412b70e296437735e98", Pod:"coredns-7db6d8ff4d-2cxpm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali83ab2f3d88c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.689 [INFO][6251] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.689 [INFO][6251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" iface="eth0" netns="" May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.689 [INFO][6251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.689 [INFO][6251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.708 [INFO][6259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" HandleID="k8s-pod-network.ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.708 [INFO][6259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.708 [INFO][6259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.712 [WARNING][6259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" HandleID="k8s-pod-network.ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.712 [INFO][6259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" HandleID="k8s-pod-network.ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" Workload="localhost-k8s-coredns--7db6d8ff4d--2cxpm-eth0" May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.714 [INFO][6259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 15 00:59:38.717134 env[1307]: 2025-05-15 00:59:38.715 [INFO][6251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a" May 15 00:59:38.717580 env[1307]: time="2025-05-15T00:59:38.717155371Z" level=info msg="TearDown network for sandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" successfully" May 15 00:59:38.720492 env[1307]: time="2025-05-15T00:59:38.720461804Z" level=info msg="RemovePodSandbox \"ea0e9a7e3508ce9e1d5ec8c89828e7a6373e20014ef1b9e6f82840917e6eb76a\" returns successfully" May 15 00:59:40.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.134:22-10.0.0.1:57318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:40.814213 systemd[1]: Started sshd@31-10.0.0.134:22-10.0.0.1:57318.service. 
May 15 00:59:40.815406 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:59:40.815477 kernel: audit: type=1130 audit(1747270780.813:635): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.134:22-10.0.0.1:57318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:40.850000 audit[6266]: USER_ACCT pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.851570 sshd[6266]: Accepted publickey for core from 10.0.0.1 port 57318 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:40.854641 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:40.853000 audit[6266]: CRED_ACQ pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.858015 systemd-logind[1293]: New session 32 of user core. May 15 00:59:40.858737 systemd[1]: Started session-32.scope. 
May 15 00:59:40.859754 kernel: audit: type=1101 audit(1747270780.850:636): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.859800 kernel: audit: type=1103 audit(1747270780.853:637): pid=6266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.859817 kernel: audit: type=1006 audit(1747270780.853:638): pid=6266 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 May 15 00:59:40.862382 kernel: audit: type=1300 audit(1747270780.853:638): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8cdccf90 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:40.853000 audit[6266]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8cdccf90 a2=3 a3=0 items=0 ppid=1 pid=6266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:40.853000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:40.868288 kernel: audit: type=1327 audit(1747270780.853:638): proctitle=737368643A20636F7265205B707269765D May 15 00:59:40.868320 kernel: audit: type=1105 audit(1747270780.861:639): pid=6266 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.861000 audit[6266]: USER_START pid=6266 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.872921 kernel: audit: type=1103 audit(1747270780.862:640): pid=6269 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.862000 audit[6269]: CRED_ACQ pid=6269 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.994303 sshd[6266]: pam_unix(sshd:session): session closed for user core May 15 00:59:40.994000 audit[6266]: USER_END pid=6266 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.996908 systemd[1]: sshd@31-10.0.0.134:22-10.0.0.1:57318.service: Deactivated successfully. May 15 00:59:40.997825 systemd-logind[1293]: Session 32 logged out. Waiting for processes to exit. May 15 00:59:40.997849 systemd[1]: session-32.scope: Deactivated successfully. May 15 00:59:40.998575 systemd-logind[1293]: Removed session 32. 
May 15 00:59:40.994000 audit[6266]: CRED_DISP pid=6266 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:41.003877 kernel: audit: type=1106 audit(1747270780.994:641): pid=6266 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:41.003929 kernel: audit: type=1104 audit(1747270780.994:642): pid=6266 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:40.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-10.0.0.134:22-10.0.0.1:57318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:45.997677 systemd[1]: Started sshd@32-10.0.0.134:22-10.0.0.1:57322.service. May 15 00:59:45.999094 kernel: kauditd_printk_skb: 1 callbacks suppressed May 15 00:59:45.999121 kernel: audit: type=1130 audit(1747270785.995:644): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.134:22-10.0.0.1:57322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 00:59:45.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.134:22-10.0.0.1:57322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 00:59:46.028000 audit[6305]: USER_ACCT pid=6305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.031100 sshd[6305]: Accepted publickey for core from 10.0.0.1 port 57322 ssh2: RSA SHA256:Iwoz1L9/QgXQ9OpXvCPQYapJE0cmIk+lKxZdEnPdReQ May 15 00:59:46.033327 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 00:59:46.031000 audit[6305]: CRED_ACQ pid=6305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.036740 systemd-logind[1293]: New session 33 of user core. May 15 00:59:46.037437 systemd[1]: Started session-33.scope. May 15 00:59:46.039067 kernel: audit: type=1101 audit(1747270786.028:645): pid=6305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.039113 kernel: audit: type=1103 audit(1747270786.031:646): pid=6305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.039134 kernel: audit: type=1006 audit(1747270786.031:647): pid=6305 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 May 15 00:59:46.031000 audit[6305]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee3498df0 a2=3 a3=0 items=0 ppid=1 pid=6305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:46.045292 kernel: audit: type=1300 audit(1747270786.031:647): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffee3498df0 a2=3 a3=0 items=0 ppid=1 pid=6305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 00:59:46.045332 kernel: audit: type=1327 audit(1747270786.031:647): proctitle=737368643A20636F7265205B707269765D May 15 00:59:46.031000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 15 00:59:46.039000 audit[6305]: USER_START pid=6305 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.050827 kernel: audit: type=1105 audit(1747270786.039:648): pid=6305 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.050860 kernel: audit: type=1103 audit(1747270786.040:649): pid=6308 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.040000 audit[6308]: CRED_ACQ pid=6308 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.137787 sshd[6305]: pam_unix(sshd:session): session closed for user core May 15 00:59:46.136000 
audit[6305]: USER_END pid=6305 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.140306 systemd[1]: sshd@32-10.0.0.134:22-10.0.0.1:57322.service: Deactivated successfully. May 15 00:59:46.141286 systemd[1]: session-33.scope: Deactivated successfully. May 15 00:59:46.141322 systemd-logind[1293]: Session 33 logged out. Waiting for processes to exit. May 15 00:59:46.142420 systemd-logind[1293]: Removed session 33. May 15 00:59:46.136000 audit[6305]: CRED_DISP pid=6305 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.146345 kernel: audit: type=1106 audit(1747270786.136:650): pid=6305 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.146398 kernel: audit: type=1104 audit(1747270786.136:651): pid=6305 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' May 15 00:59:46.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-10.0.0.134:22-10.0.0.1:57322 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'